The old internet was a more homogeneous society: social outcasts and technically capable people who liked interacting with computers. The content was more relatable because it was created by similar kinds of people. Now the internet is for everyone, so the content is for everybody.
It's too easy to blame the algorithms when the algorithms are a necessary evil. TikTok has millions of videos uploaded per day. You are not going to sort through all of those on your own. The algorithm is designed to show you more of what you interact with. If you're not finding joy in what you're seeing, it's because you're not interacting with content that gives you joy. Stop watching the slop, search for the things you like and follow good creators. There are a lot of them out there, depending on what you like. That applies to any social media, not just TikTok.
> It's too easy to blame the algorithms when the algorithms are a necessary evil. TikTok has millions of videos uploaded per day. You are not going to sort through all of those on your own.
I don't necessarily think that people have an issue with the algorithms themselves, but rather that all of the platforms that implement them manipulate and tune them so that you constantly stay engaged. And that boils down to pushing ragebait, low-effort clickbait, and shock content over everything else.
Now it is possible to avoid falling into this, but it's not the default. If I have to actively fight to not see people dying, asinine political and cultural takes, or AI slop, then it's a bad experience, and I will yearn for the days when gaming let's plays and video essays were the default. It's easy to say "just don't watch it", but is it really "just" that easy when the whole platform is constantly being tweaked and optimized against the content that someone would prefer to see?
I think that's fair; the algorithms are manipulative, and one has to be very aware of one's own susceptibility to them. Everything you specifically mentioned is why I don't go on Twitter to scroll anymore. I will use the platform to search for something, or I will click links to it, but Twitter is not my go-to for scrolling dopamine because it is too negative.
The branch name is "claude/phase-a-port"; there was zero indication this was an experiment until Jarred commented. A more accurate title might simply have been "there is a branch in the official Bun repo describing a port to Rust from Zig". No amount of soft titles would have prevented the discussion. People have their opinions about Bun, about Zig, about Rust, and it's all going to come out on a discussion board.
Can’t every branch be considered an experiment? I have a ton of experimental branches that I don’t label «experimental». One of the reasons you use git…
Sure, but then how does it change anything around the discussion? You are still running an experiment to port to Rust, it still gets posted, the Rust-heads and Zig-heads still make their comments.
> there was zero indication this was an experiment
> The goal of Phase A is a **draft** `.rs` next to the `.zig` that captures the logic faithfully — it does **not** need to compile. Phase B makes it compile crate-by-crate.
I mean, it would be hard to spell it out any more clearly than that! Code that fails to compile is just not very useful for real work.
Phase B clearly says compilation is the next goal. The first goal is to get a like for like logic, the second goal is to get it to compile. Can you guess what the third goal will be? Throw out the code?
Yes, but that would require people to read past the title. You can't get a proper knee-jerk first post in if you do that! Completely unfair to expect people to make that sacrifice/effort.
[there was some sarcasm there, BTW, if anyone has a faulty detector that didn't pick up on it]
I couldn't use that title because I didn't know if it was an experiment at the time. Even now the correct title would be "Bun author says he is entertaining the idea of porting it from Zig to Rust, creates an experimental branch".
This entire article is basically saying "What are we doing? What's going on?" and I could not agree more. My own experience with coding agents has been FOMO, because if I don't have fifteen Claude tabs running with OpenClaw, I'm not going to make it. I much prefer keeping myself in the loop and staying active in the process to handing it off to a deus ex machina and waiting for results that may or may not be what I like.
I do like the tips on how to work with agents for delegation. Let it do boring things. The deterministic things where you know what the result should look like each time.
I recently turned to list making for offloading all the mental tasks and organizing my life better. Running low on eggs? "Hey Siri, add eggs to my groceries list". Random thought I want to google? "Hey Siri, remind me later to look up XYZ topic". I've even set up a few iOS Shortcuts that connect into my Obsidian notes so that I can quickly dictate notes about books I'm reading or ideas I want to capture for later writing.
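For anyone curious how a Shortcut can talk to Obsidian: Obsidian registers an `obsidian://` URI scheme, so a Shortcut can capture dictated text just by opening a URL. Below is a minimal sketch in Python of how such a URI can be built; the vault name "Notes" and the note path "Inbox/Reading" are made-up examples, not anything from the comment above.

```python
# Sketch: build an obsidian://new URI that appends dictated text to a note.
# An iOS Shortcut would construct the same URL with its "URL" + "Open URL"
# actions; the vault and note names here are illustrative assumptions.
from urllib.parse import quote

def obsidian_capture_uri(vault: str, note: str, content: str) -> str:
    """Return a URI that appends `content` to `note` inside `vault`."""
    return (
        "obsidian://new"
        f"?vault={quote(vault)}"
        f"&name={quote(note)}"
        f"&content={quote(content)}"
        "&append=true"  # append to the note instead of overwriting it
    )

uri = obsidian_capture_uri("Notes", "Inbox/Reading", "Look up XYZ topic")
print(uri)
```

The `append=true` parameter is what makes this safe for repeated quick captures: each dictation lands at the end of the same inbox note rather than replacing it.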
I don't know if it makes me sharper, but I am able to remain focused on the present and offload the thought to future me. This has been enormously helpful and makes me wonder why I never did it regularly beyond grocery lists. Even those lists used to be a mad scramble of "what do I need?", looking around and almost always forgetting something.
The prices on AliExpress for e-ink are not that bad, but you certainly can't get anything as big as the Mira Pro. The Boox premium buys you plug-and-play compatibility, high fidelity/refresh rate, and support.
GPT is impressive, with a consistent 0% false-positive rate across models, yet its detection rate tops out at 18%. Meanwhile, Claude Opus 4.6 is able to detect up to 46% of backdoors, but has a 22% false-positive rate.
It would be interesting to have an experiment where these models are allowed to actually exploit what they find, though their alignment may not permit that. Perhaps combining models could enable that kind of testing: the better models identify backdoors and write up "how to verify" tests, and the "misaligned" models actually carry out the testing and report back to the better models.
Rerun it at the "high" and "xhigh" effort settings, and GPT-5.2-Codex still gets 0% false positives while reaching the level of the other best models at localizing backdoors: https://quesma.com/benchmarks/binaryaudit/
The funny thing about the friends feed is that it highlights for me who is extremely active on the platform, the people resharing stuff all the time. And it's one of the few feeds you can't endlessly scroll through: it will tell you to "check back later" once you get to 3-4 days of updates. There's no money in showing people their friends feeds, so why let them endlessly scroll?
I think your experiment was valid, even if anecdotal. This article from January 2009 was talking about the phenomenon of what it actually meant to have friends on Facebook. Are you a "loser" or a "social slut"? This was at least a few years before most of the algorithms that we perceive as dangerous and enshittifying became core to the platform. The specific study they referenced (new link below) argued that there are genetic components in how we perceive our social networks.
Where FB and Instagram are to blame is not just in being aware of the psychological impact but in amplifying it to make it worse, especially on a teen audience that has no capability of distinguishing the real world from social media. To them, it's exactly the same. Your online social circle may be all you have in real life, not to mention the cyberbullying, unrealistic body standards, and all the other awful parts that come when you gamify and reward capturing people's attention.
I won't deny that individuals are also responsible for guarding themselves, parents especially, but these platforms have been accused (and are currently in US court) over the fact that they knew about the addictive potential of their platforms and put no safeguards in place. As a platform owner, you are responsible for all aspects of its successes and failures, its highs and lows.