It has been funny to watch people’s attitudes on copyright change ever since ChatGPT blew up. All I used to hear and experience was copyright used by corporations to shut down open source projects threatening their business models, but now it is the savior of the little guy who is a victim of flagrant corporate violators. In the background, the wealthy and powerful disregard all of this and seem to do whatever they want, and the little guy looks at millions of dollars in legal costs to defend themselves in either case. Costs that are increasingly a rounding error to their opposition as they continue to grow by exploiting a broken system, and the “little guy” now includes whole industries.
I feel like adversarial interoperability more than free market capitalism should have been the death knell for most of the negatives highlighted in this post. Everyone is still so determined to make money from mere ideas, however, that we still use 1700s law designed to protect book publishers to enable the existence of “businesses” so warped in valuation that they are now trillion dollar entities yet always face the existential threat of copy+paste. What if the more profound truth is that tech is beneficial to humanity but inherently worthless to sell, and that the shape of our present woes is determined by the antiquated institutions built to service this illusion of value? In an inevitable future age of generative AI as an accessible technology, as opposed to a business model with a moat, what even is our goal for such institutions? What sorts of creativity do we want to motivate, and what meaningful regulatory constraints even are there to begin with? I hope we figure it out soon, because IP will be impossible to enforce post-deglobalization in any case.
Think it's just the hypocrisy. Either copyright for everybody or copyright for nobody is much more defensible than the current state of affairs, where infringing copyright is legal as long as you're rich. Some random guy in Nebraska had to pay $250,000 to a music company for downloading one MP3, but OpenAI can download all music that ever existed and pay nothing. Meanwhile they prosecute "Anna" who did the exact same thing, because "Anna" isn't politically well-connected.
> where infringing copyright is legal as long as you're rich.
This isn’t true. A rich person and a poor person can train LLMs on copyrighted material in 2026. How they acquired those materials matters. Wealthy corporations hold no legal advantage in this space. For example, Anthropic recently settled for $1.5 billion due to acquiring books via piracy: https://www.nytimes.com/2025/09/05/technology/anthropic-sett...
My understanding is that an individual could likely pirate the same books without paying a dime (not due to differing legal standards but simply due to the fact it would be hard to identify them in many jurisdictions). In a practical sense it seems corporations are held to a higher standard in this regard.
The discrepancy is that some people equate training a model with piracy even though they are not the same thing. This is typically due to intellectual laziness (refusal to understand the differences) or willful misrepresentation (due to being ideologically opposed to generative AI). No need to make such a mistake here though.
Of course it's not the same thing -- it's way worse.
The piracy comes first, and it's exactly the same thing. GenAI Corp. can't train models on illicitly obtained media before illicitly obtaining said media. And that very thing is already what private individuals got and get sued for millions over.
The GenAI Corp., having gotten away with that unpunished, then goes on to commit further violations by commercially exploiting the media with neither a license to do so, nor any intentions to pay the rights-holders for their use.
By the media conglomerates' own math, these GenAI companies should all be drowning in lawsuits over kazillions of bajillions of dollars.
> The piracy comes first, and it's exactly the same thing. GenAI Corp. can't train models on illicitly obtained media before illicitly obtaining said media.
My contention is that this is not happening. Most generative AI companies do not source their training data from illegal torrents and the few that do are currently paying for it. Further, I suspect the companies that get away with it today are _smaller_ not larger.
Training data is typically sourced by scraping the publicly available web.
> Of course it's not the same thing -- it's way worse.
Setting aside your own moral standards here, we should at least be able to agree that from a legal standpoint training a model is not copyright infringement.
> A rich person and a poor person can train LLMs on copyrighted material in 2026.
Updating an old adage for the modern age:
“The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread.”
― Anatole France
As others have said, it's not a change. There's no inconsistency in applying copyright to protect people. When Gigantic Company uses copyright to bully the little guy who isn't doing anything to materially harm Gigantic Company, that's bad. When AI steals the little guy's work, that's bad. They're both bad. That's consistent. It's also obvious that it's consistent - i.e. I don't believe people making the "AI copyright complaints are funny" quip are being honest. I believe they are simply engaging in petty social politics.
> It has been funny to watch people’s attitudes on copyright change ever since ChatGPT blew up.
I doubt many individuals actually changed their opinions. Just that a large crowd of previously-silent people decided AI is a threat to them and they can attack it on copyright grounds. The AI revolution is a great argument against copyright law. The US's lax enforcement means that the incredible, world-changing tech could be built before the luddites got organised to try and stop it. The productive path appears to be illegal, but they took it anyway and we're all the better off for it.
The reasons that jump out at me are that, as a society, we're setting up to produce more stuff with less effort, provide higher quality advice to everyone at an absurdly low cost, revolutionise research, and it looks like we're going to be able to get a step-change improvement in the quality of economic management, which is huge in and of itself. The wins seem like they're going to be big.
> we're setting up to produce more stuff with less effort
According to Jevons paradox[0], this would lead to more consumption of resources. We're already straining at the limits of the Earth. Depletion and collapse won't be good for anyone.
> provide higher quality advice to everyone at an absurdly low cost
Given every LLM's propensity to hallucinate, the only quality advice is that which can be followed back to a human expert-vetted source. But we already have people who don't check sources and get bad advice.
> revolutionise research
Maybe, but AI is also being used in a mass spread of misinformation.
> a step-change improvement in the quality of economic management
I don't know exactly what you mean by this, but from what I'm seeing so far, this looks like it will massively increase wealth disparity, which is bad for most people.
>It has been funny to watch people’s attitudes on copyright change ever since ChatGPT blew up. All I used to hear and experience was copyright used by corporations to shut down open source projects threatening their business models, but now it is the savior of the little guy who is a victim of flagrant corporate violators.
I agree. My point in short is that we seem to reflexively frame right and wrong on an axis defined by copyright, and somehow we’ve lost sight of the fact that the law itself is used much differently than we might otherwise want.
Technolibertarians mistake free market capitalism via copyright-enabled businesses for a viable strategy for individual freedom, and we find with time that only bastards win in a competition with loose rules and high stakes. Those concerned for the continued flourishing of human creativity in the face of LLMs mistake copyright for a means for small creators to have some ownership over their work, when it actually just seems to be a cudgel that can only be wielded by the wealthiest. Same losing fight, different flavor. I ask: why do we continue to allow “ownership of ideas” to underlie the moral basis of our conversations to begin with?
I think it's more that we see copyright as a necessary evil that can be used to defend our rights, but will be abused by the powerful, regardless.
To me, the biggest sin of cyberlibertarianism is the assumption that "cyberspace" is de facto another universe, separate from material reality, that doesn't need to be affected by the mundane and vulgar rules of "meatspace." John Barlow refers to "your governments" as if using a computer actually separates him from the state in some meaningful way, as if he has ascended beyond the flesh and now looks down upon the world as a being of pure Mind. But of course, "cyberspace" is just computers, servers, infrastructure using power and resources and thus is inextricably subject to government and systems of law. Zion was never an escape.
So yes, because cyberspace doesn't actually change the rules of the game, we have to play the game, crooked as it is, with the hand we're dealt. The legal pretense of ownership and copyright is all we have. If you want to abandon the idea of "ownership" altogether, then the wealthiest and most powerful still wind up controlling everything by virtue of their wealth and power. What do you suggest?
The whole thing just shows a huge lack of imagination, at least for something which is supposedly a 'founding document'. Barlow's "cyberspace" is for irrelevant shit like furry larping or talking about the latest Deep Space 9. It's not a place where you do banking (or even watch DS9).
> John Barlow refers to "your governments" as if using a computer actually separates him from the state in some meaningful way, as if he has ascended beyond the flesh and now looks down upon the world as a being of pure Mind. But of course, "cyberspace" is just computers, servers, infrastructure using power and resources and thus is inextricably subject to government and systems of law. Zion was never an escape.
I don't understand what you're trying to say here, is it that "cyberspace" couldn't exist as anything "real" because governments can just shut down servers? That's why you can't buy drugs and credit card numbers online anymore, right? Sarcasm aside, you seem to be using the fallibility of the current-popular physical layer to dismiss the otherwise separate tangible "space" that does seem to exist when lots of people can communicate fluidly with each other across vast distances. Or is your critique centered on the ability of "cyberspace" to go beyond just communication and serve as a space one can actually "live" in?
> The legal pretense of ownership and copyright is all we have. If you want to abandon the idea of "ownership" altogether, then the wealthiest and most powerful still wind up controlling everything by virtue of their wealth and power.
Limiting abandonment of "ownership" to only "copyright" and IP generally, what do you propose the wealthy would control that would allow them to replicate present circumstances in "cyberspace"? The best I can think of would be communications infrastructure, and they didn't build that by themselves (at least in the US) to begin with.
For example, why would TikTok continue to be usable as a brainrot generator & propaganda tool when content is necessarily separate from the algorithm and presentation layers? Current bastards exploit their centralized control based on this house-of-cards ownership structure. Nothing is practically stopping users from cloning the contents from the CDN and writing a new frontend besides legal threats. This is true of almost every tech business that exists, and many of them themselves exploited this asymmetry during their founding. They exist because billionaires use the legal system to scare individual upstarts from threatening their business model.
Mythos is good for cybersecurity simply because now executives can’t just tell people that only superhackers can break their stuff, as people wouldn’t believe them now anyways.
Infosec for decades has been 99% “hey I found some low-hanging fruit” only to get treated like a liability by the company you report it to, if you got acknowledgment at all. Because of Mythos though, now Artificial Superhumans can find these same vulns, and anyone could be running such an intelligence! Even better, the rich untouchable people operating this particular Artificial Superhuman can’t just be suppressed or ignored by the other set of rich untouchable people who have routinely not cared in the past. So long as it makes Anthropic money, maybe we’ll actually see improvements in security!
I don't see that it makes much difference until we know the distribution of issues that Mythos finds and how reliably it discovers them. Vulns from inspection are discovered via a stochastic process of someone looking at the code, knowing about bug classes, and paying sufficient attention to notice them. That's still the case.
IMHO the main thing that's interesting about AI-assisted bug hunting is that it shifts the balance of power from people who had a lot of free time & attention to the state and big business, who have money and frontier model access. It's a broadly "conservative" development in the sense that it distributes more power to groups who've already got it.
Waiting for the cyber "proxy wars" where state A equips deniable groups x, y with frontier access to undermine state B.
My point is less about Mythos specifically, more about what it represents to the general public. “Mythos” has broken through and started gaining popular mindshare like “ChatGPT” did a few years ago. It now becomes hard to (falsely) claim that fixing basic flaws isn’t a priority, because now everyone knows that it’s probably easier to hack stuff than it was in the past.
If you rely only on security through obscurity (e.g. attackers not having the source code), you're gonna have a bad time. And even if your source code is not available, attackers can make a good guess about your dependencies. Find a vulnerability there and chances are your software is also vulnerable.
It’s good enough to find one known function from libc in a program’s memory to mount the attack. Moreover, there are automated methods to leak pointers to functions, etc., without having access to the binary itself.
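To make that concrete, here's a minimal sketch of why a single leaked libc pointer is enough: ASLR slides the entire library by one constant offset, so one known runtime address reveals every other function's address. The offsets below are invented for illustration; real ones come from the target's actual libc binary (e.g. via `readelf -s libc.so.6`).

```python
# Hypothetical symbol offsets inside libc (made up for this example).
PUTS_OFFSET = 0x80ED0    # offset of puts() from the start of libc
SYSTEM_OFFSET = 0x50D70  # offset of system() from the start of libc

def libc_addresses(leaked_puts_addr: int) -> dict:
    """Given one leaked runtime address of puts(), derive the rest.

    ASLR randomizes only the base address; intra-library offsets
    are fixed, so base = leaked_address - known_offset.
    """
    base = leaked_puts_addr - PUTS_OFFSET
    return {
        "base": base,
        "system": base + SYSTEM_OFFSET,
    }

# One leaked pointer (e.g. read off the GOT) yields system()'s address:
addrs = libc_addresses(0x7F3A12345ED0)
```

The same arithmetic works for any pair of known symbols, which is why defenses have to prevent the leak itself rather than hope the attacker lacks the binary.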
I do find it hard to tolerate the feeling of being watched online. The second-most trending dataset on huggingface right now is a snapshot of HN updating at a 5 minute interval. It makes me not want to really comment at all, just like how I don’t really publish any software I write anymore.
Turns out it sucks to produce original works when you know that, whereas previously a few people at best might see your work, now it’s a bunch of omniscient robots and maybe half of those original people are using the robots instead.
This is really interesting to me, because it never occurred to me to feel this way. Why would I care whether my comments are ending up in some dataset somewhere that's being used to train some model? My comments are boring and mostly uninformed. Have at it.
I'm curious: would you say the feeling of being watched online is making you afraid of some repercussion, or is it something else?
I get a feeling from overall anti-AI sentiment online that a lot of people feel they're entitled to 100% of value created by anything even tangentially related to their person, whether that's some intentional contribution or a random brain fart that happened in the vicinity of someone else doing something useful - and then become resentful they're not "getting their share".
There's hardly any other way to read all the proclamations of quitting to do anything because "cognitive dark forest" (itself a butchering of the original idea of "dark forest" across so many orthogonal dimensions in parallel, that it starts to look like a latent space of a transformer model).
Conversely, some people feel entitled to 100% of the value created by others. Oh, you wrote a book? Too bad, it's a part of my training data set now.
Downloading public stuff off the internet with no regard for the creator's wishes or license is bad enough, but we have many people here who defended AI companies seeding models with pirated content.
The internet is a social contract. AI is not the first thing to try and erode it for profit, but it's by far the most aggressive one.
Putting a book into a training data set does not take 100% of the value created by the author. You could make a convincing argument that since the LLM was never going to purchase the book, and the number of people who would have purchased the book but now won't because it's included in the training data is effectively zero, that no value was lost at all.
Licenses are legal documents and are usually treated as such, but "the creator's wishes" are irrelevant without case law, legislation, or licensing to back it up. And jurisdiction - show me a license that doesn't stand up in court in my home jurisdiction and I'll show you a license I won't care if I break or not.
Let's not forget the basis here: To promote the progress of science and the useful arts.
Everything else is window dressing. The fact that licenses even exist to conditionalize use goes against this grain and creates far too much overreach that spoils the spirit of the basis of copyright law.
> I get a feeling from overall anti-AI sentiment online that a lot of people feel they're entitled to 100% of value created by anything even tangentially related to their person
Rather, I don't like that the terms I released my work under aren't being respected. I believe LLMs are derivative works of the pieces they are trained on. I spent more than ten years working on open source code, and now the models that were trained on my GPL'd code are being used to make proprietary code against the terms of the license. I find this reprehensible.
While it wasn't an explicit term of release, generally I did not expect anyone to get any kind of financial value from the blog posts I wrote. I just wrote them for fun & maybe others would find them interesting. Now, LLMs have been trained on my blog posts and are generating financial value for some of the worst human beings on the planet who are using their money to murder, demean, and maim other humans.
I now know that blog posts I wrote for fun are putting money in some sociopath's bank account, and the GPL'd code I wrote is being used to create software to exploit me & other users. If I continue to create things publicly, it will be used against me and other people, and there's nothing I can do to stop it except to stop creating things. It's all very disrespectful & demoralizing.
> I believe LLMs are derivative works of the pieces they are trained on
That's your opinion with 0 legal backing. IMO, calling them derivative is untenable logically for anyone with some understanding of LLM/transformer architecture.
You desire a sharing community, but the takers/defectors are destroying that community.
Copyleft attempts to create a pool of code that forces sharing. But it broadly fails because you simply can't force antisocial people to be good sharers (plus source code usually isn't as valuable as we hope).
With any gifting/sharing, you have to accept that some of it will be abused. It is hard to filter for only community minded people who don't greedily abuse, and ideally who give freely.
I don't believe my circle of friends are becoming more selfish. I'm unsure what I would say about the rest of the world.
I am in exactly the same boat, down to the ~10 years. Only difference is I ended up picking AGPL for my later works. Like it made a difference...
The whole situation disgusts me.
- They expect me to pay for access to my own stolen code.
- Arguing stealing should be legal because China does it and if US companies don't, they'll be left behind.
- People like the poster you're replying to who argue you're not entitled to 100% of the value you create - completely ignoring that the value will go to some-one and that some-one is already much richer than any of us and getting richer faster while providing less value, if any. Honestly, this makes me wanna track these people down just to find out if they're also in the owner class and are just secretly laughing at us while pretending "we're all equal" or if they're workers who genuinely don't understand how much they're being exploited and how much worse it's gonna get.
- People don't give a fuck. Colleagues happily using "AI" because it "saves time", not realizing if this continues, we'll all be without jobs and the only way this was possible was by stealing from each other and most of us being OK with it.
Honestly, I am hoping for a revolution. A proper one, with guns if need be, but most importantly, where people get what they deserve in full.
Last time this happened was during the second industrial revolution, so many people got fucked so hard, entire countries turned to communism. That was a bad idea but we can do better. It's not (just) about who owns the means of production but who owns the product. Even if "AI" turns into actual AI, as long as it's built on top of our work, we should own it - that means both controlling it and getting paid proportionally to our contribution.
The currently rich people can negotiate what fraction they get paid if they show us they're providing value. Of course, only after we get back what they stole and unless they end up executed. The value of a human life is apparently $7.5M so anybody who steals more than that should logically get a death sentence.
But none of this will happen, people are too stupid and will get manipulated by a charismatic liar like every single time before.
Whoever can materialize it. That's how societies grow and thrive, how a civilization is built - people building things, and instead of capturing 100% of the value, creating a surplus for others to build on top.
It's not like any of us ever did anything completely new, isolated and unaffected by influences and contributions of those around us, and those who came before us. Trying to capture 100% of the value and getting up in arms about "freeloaders" is a deeply antisocial form of greed, and usually the thing people accuse companies of doing, claiming it's a hallmark of "late stage capitalism".
So you're saying that the most advantaged people (who control the most money to use for advertising and who can buy companies and their network effects at will) should get the most benefit?
> how a civilization is built
No, civilization is built by people who do actual work. Some of that work is services/research and building/maintaining stuff, some of that work is connecting supply and demand. The reward should go to the people doing the work according to how much work they do and their skill level.
> late stage capitalism
Nah, that's the idea that money should be able to create more money without any input of work. And before you say they made that money through work, no they didn't, they either inherited it or got into a position of power from which they can take a disproportionate cut.
I see this take a lot and I think it's harmful in ways you might not realize.
Even if it's true and you genuinely have nothing to hide, have nothing to lose from being profiled, there are people who absolutely do.
Look at the radicalization happening in countries around the world, including the USA. It might be OK to be part of a minority or to have an uncommon opinion. A few years pass and suddenly the same person is considered an undesirable, a foreign agent, a terrorist or a deviant.
I've posted a lot of shit online which can be connected to my person and which could label me as any of the groups above. But that's a decision I make for myself. I would never dare make it for others or claim that they should not care about surveillance and take the same risks I do.
I know a guy from russia who lost his job because he expressed an antiwar opinion. The same thing can happen in the US or any other country you consider civilized. The US proto-dictator is already sending death threats to people who only expressed the opinion that soldiers can refuse illegal orders. Neither you nor I can know what will happen next.
There’s definitely a fear of repercussions (I’ve been commenting on this site for over a decade now! Who knows what’s in my history...) but importantly I actually take some pride in many of the comments I write. What drew me to this site originally was how high quality everyone’s perspectives and articulation was, and I suppose I view the writing voice I’ve nurtured here as unique and special to me. It’s not about compensation, I’d just hate to see some future chatbot sound 1/1,000,000th like me I guess? Hard feeling to describe, but I’d rather just not be globbed in and instead express myself in ways that aren’t profitable or feasible to copy.
HN comments have always been public, I don't really understand this thought process. The robots also aren't going to care about some individual user, it'd be more of an agglomeration of everyone's comments.
This sounds like a nice principled stance, but you won't get any traffic with this approach. That's demotivating - to me blogging is a tight balance of exploration, learning, improving and feedback. I'm not able to write without considering how this impacts the reader - removing all readers breaks the process for me.
> And for people who successfully taken back their creative writing skills, how did you do it?
“AI is one possible reference for my actual writing”. Generate info and perspectives, but only ever write stuff yourself. Something about this forces me to stay in my own “writing voice”, at least personally, in the various places I use AI tech. I think of the tech as a chess engine: engines are better than any human player, but I use them to gain perspective rather than to cheat. Otherwise, why bother playing chess?
Given that electric cars bear much bigger responsibilities than combustion cars (avoiding driving into that bicyclist, for one), there are new concerns here which demand extra consideration.
I actually think we should be asking more of safety regulations here with regards to the design of electric/computerized cars.
Think of it this way: every concern you have about a teenager having root on their electric car applies equally to any sociopath hacker (AI-enabled, for modern nightmare fuel) who finds a root vulnerability and decides not to be a good person with it. If a teenager can mess with the collision avoidance, then e.g. Israel can modify it to murder anyone who talks shit about Israel in the car. Or the CIA could turn it into a weapon. Or one day some dev could push a bad OTA update. Et cetera. Our safety regulations should mandate design features to prevent a malfunctioning computer from posing any greater safety risk than any other modified part in the car.
Up until very recently, cars were not remotely accessible or part of a command-and-control network, which Teslas are (perhaps other modern cars are too; I only know Tesla because I have one).
I know that the car reports practically all user events to Tesla in real time over the cell network (e.g., opening a door), and I know it has root access. I don't know if that root is available remotely, and I don't know if foundational commands like steering, acceleration, and braking are accessible via the CLI (they are computer-controlled actions locally).
THUS I would not want to drive a Tesla if there was the possibility of all cars being rooted and remotely controlled by an unauthorized actor.
No one should have nuclear weapons; we ought to have robust policy, institutions, and vigilance to prevent their proliferation and use.
Computerized vehicles ought to be strictly regulated in terms of how computers may affect the physical operation of the car, such that a reasonable standard of safety can be ensured beyond the usual risk one takes when hopping in a motor vehicle. The fact that a hacker can possibly kill people by rooting an infotainment system is a symptom of the general disregard for security in design, and we continue to ignore it for engineering expediency.
You can’t really avoid paying for security, which seems to historically be why it is ignored and risked. I’ve always felt the right approach is for an internal security & reliability org to be formed to provide an owner and maintainer for core services and libraries, so that things are built robustly from the get-go. Think premade formulations and integrations for auth, hosting, data storage, etc. Some companies have small security teams that _kind of_ fill this role, but usually they’re a gate you must pass rather than an ally helping you navigate hard problems by providing and maintaining prebuilt solutions. I’d rather just require that normal devs not need to solve these problems and instead be provided an appropriate sandbox to deploy software in.
I’d be curious to know where you source your data from! Your project (neat idea btw) has me thinking about tracking this data for my own personal profile over time in some sort of dashboard, to see how Google’s opinion of me changes with my behavior online
The data comes straight from Google's Ad Center (myadcenter.google.com). Google shows you the interest categories and brands they've assigned to your profile. I automated scraping that page daily for each account during the experiment.
MirrorMask actually does exactly what you're describing. It scrapes your Ad Center profile before and after each session and shows you the diff. You can watch interests appear and disappear over time. The dashboard tracks your profile changes across sessions.
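For anyone wanting to roll their own version of this, the diff itself is trivial once you have two snapshots. A small sketch in Python (the category names are invented, and the actual scraping of myadcenter.google.com is assumed to happen elsewhere):

```python
# Compare two snapshots of the interest categories Google has assigned
# to an account, taken before and after a browsing session.
def profile_diff(before: set[str], after: set[str]) -> dict:
    """Return the categories added, removed, and unchanged between snapshots."""
    return {
        "added": sorted(after - before),
        "removed": sorted(before - after),
        "kept": sorted(before & after),
    }

# Example with hypothetical categories:
diff = profile_diff(
    before={"Cooking", "Cycling", "Home Audio"},
    after={"Cycling", "Home Audio", "Luxury Travel"},
)
# diff["added"] == ["Luxury Travel"], diff["removed"] == ["Cooking"]
```

Run daily and stored per account, a log of these diffs is enough to chart how the profile drifts in response to behavior.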
> Isn't it a stretch to round off "trans content" to "LGBT+ content"?
Not really. Do you think the people attempting to ban trans content are otherwise fine with kids being gay/lesbian/etc? Do you think they view gay/lesbian identities as legitimate, rather than unnatural perversion? It’s the same rhetoric in my experience: we’re all just deviants making choices. It seems like casual uninvested people just got used to gays being in the public eye, and anti-gay people lost the ability to get anyone to care about that position. Turns out they’re just normal people trying to live their lives.
> Immigrants being lynched is certainly a subset of "anti-immigration", but it's still misleading
I don’t think your analogy works unless you believe that transgender people are uniquely extreme compared to other identities. If true, I think that more shows your prejudice than anything. Maybe if enough trans people end up in the public eye, casual uninvested people will stop thinking negatively about trans people generally too. Maybe one day they’ll realize we’re just people trying to live our lives.
I really didn’t want this to succeed, could you imagine an alternative future where people are strapping these things to their faces and immersing their full FOV in a zuck-controlled virtual shopping mall? The Facebook brand is absolutely toxic imo, I think it’s an incredibly understated reason for this product’s failure. I’d love to develop for these devices though if I could somehow avoid interacting with Meta beyond as an OEM.
You are right IMO to question why North Dakota police were able to detain this Tennessean woman in the first place; you’d think something like that should require far stronger evidence than facial recognition.
But then, what good is facial recognition for? Would it have been okay for this woman’s life to be invaded merely because she matched a facial recognition system? Maybe they can just secretly watch you so you’re not consciously aware of being investigated? Should that be our new standard: if a computer thinks you look like a suspect, you can be harassed by police in a state you’ve never even been in?
I just don’t see a legitimate way for AI to empower officers here without risking these new harms. That’s why I lean towards blaming the AI tech, rather than historically intractable problems like the reality of law enforcement.
Having a facial recognition match make you a suspect and cause the police to ask you some questions doesn't seem completely unreasonable to me. Investigations can certainly begin with weak forms of evidence (like an anonymous tip), you just require a higher standard of evidence for a search warrant, surveillance, or an arrest. A facial recognition match shouldn't be probable cause for an arrest warrant, but it still might be a useful starting point for a detective looking for actual evidence.
It is absolutely not reasonable to use low-quality photos to decide someone halfway across the country with no history of even leaving their local area is 'a suspect'.
Why doesn't the investigator have to supply some sort of evidence that she has a history of leaving her local area, rather than putting the onus on the accused? This line of argument is halfway to "guilty until proven otherwise".
You and the GP that replied to me are way overstating what it means to be a "suspect". It just means the police are investigating you and consider it a possibility you've committed the crime. On its own, it is not a sufficient status to search your home, subpoena your ISP, or arrest you - all of those things require a much higher burden of evidence, and often a third party's (judge's) approval. People routinely become "suspects" on much flimsier evidence than an unreliable software match - if I call in an anonymous tip that I saw you acting suspicious near the crime scene, you will probably become a suspect.
If you'd like, you can replace the term "suspect" in my post with "person of interest", which colloquially implies a lot less suspicion but isn't practically any different in terms of how the police interacts with you.