
The reality is that we spend FAR more time reading code than writing it. That is why readability is far more important than clever, line saving constructs.

The key to further minimizing the mental load of reacquainting yourself with older existing code is to decide on a set of code patterns and then be fastidious in using them.

And then, if you want to be able to easily write a parser for your own code (without every detail in the spec), it's even more important.

And now that I have read TFA, I see he wrote:

> We have tooling that verify basic code style compliance.

His experience and diligence have led him to the mountaintop: we must make ourselves mere cogs in a larger machine, self-limiting ourselves for the greater good of our future workload and production quality.



> The reality is that we spend FAR more time reading code than writing it. That is why readability is far more important than clever, line saving constructs.

In JS I sometimes chain two or three inline arrow functions specifically for readability. When you read code, you often search for the needle of "the real thing" in a haystack of data formatting, API response prepping, localization, exception handling, etc.

Sometimes those shorthand constructs help me to skip the not-so-relevant parts instead of mentally climbing down and up every sort and rename function.

That being said, I would not want this sentiment formalized in code guidelines :) And JS is not C except both have curly braces.


> That being said, I would not want this sentiment formalized in code guidelines :)

Surely. I'm all for code formatting standards as long as they're MY code formatting standards :-)

Ideally, I'd like the IDE to format the code to the user/programmer's style on open, but save the series of tokens to the code database in a formatting-agnostic fashion.

Then we could each have our own style but still have a consistent codebase.

And, I should add that my formatting conventions have gotten more extreme and persnickety over the years, and I now put spaces on both sides of my commas, because they're a separate token and are not a part of the expression on either side of it. I did this purely for readability, but I have NEVER seen anyone do that in all my decades on the internet reading code and working on large codebases. But I really like how spacing it out separates the expression information from the structural information.
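A minimal C sketch of the two conventions side by side (the function names are invented purely for illustration):

```c
#include <assert.h>

/* Conventional spacing: the comma binds to the token before it. */
int sum3(int a, int b, int c) { return a + b + c; }

/* The spaced-comma convention described above: the comma gets a space
   on both sides, visually separating the structural token from the
   expressions it delimits. */
int sum3_spaced(int a , int b , int c) { return a + b + c; }
```

Both compile identically, of course; the difference is purely in what the eye parses.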

It also helps me deal with my jettisoning of syntax coloring: as useful as I've found it in the past, I don't want to deal with having to import and set up all that environmental stuff in new environments. So I just use bland vi with no intelligence, pushing those UI bells and whistles out of the editor and into my code formatting.

And, I fully endorse whatever it takes for you to deal with JS, as I have loathed it since it appeared on the scene, but that's just me being an old-school C guy.


> That is why readability is far more important than clever, line saving constructs.

Yes, I agree, that is why I am put off by some supposed C replacements that are trying to be clever with their abstractions or constructs.


Could you give an example of "clever" (bad) vs "simple" (good)?

In my experience C has a lot of simple grammar, a commonly-held simple (wrong) execution model, and a lot more complexity lurking underneath where it can't be so easily seen.

(One of my formative learning books was https://en.wikipedia.org/wiki/C_Traps_and_Pitfalls , valid in the 90s and mostly still valid today)


Simplicity is essential to achieving manageable complexity over time.


Abstraction is necessary to handle scale. If you have painstakingly arrived at a working solution for a complex problem like say locking, you want to be able to package it up and use it throughout your codebase. C lacks mechanisms to do this apart from using its incredibly brittle macro facility.
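As a rough illustration of the point, here is about the best C's macro facility can do for packaging a lock/unlock protocol. (WITH_LOCK and the depth counter standing in for a real mutex are hypothetical, chosen so the sketch compiles without a threading library.)

```c
/* Hypothetical sketch: wrapping an acquire/release protocol in a macro,
   C's only packaging tool for this besides plain functions.  The
   do/while(0) wrapper makes the macro behave as a single statement,
   e.g. after an if with no braces. */
static int lock_depth = 0;              /* stand-in for a real mutex */

#define WITH_LOCK(stmt)                 \
    do {                                \
        ++lock_depth;   /* acquire */   \
        stmt;                           \
        --lock_depth;   /* release */   \
    } while (0)

static int counter = 0;

int increment(void)
{
    WITH_LOCK(counter++);
    return counter;
}
```

The brittleness shows up as soon as the wrapped statement contains a return, break, or goto: control leaves the block without releasing the lock, and neither the preprocessor nor the compiler will warn about it.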


Ada has built-in constructs for concurrency, with contracts, and there is formal verification in a subset of Ada named SPARK, so Ada / SPARK is pretty good.


> C lacks mechanisms to do this apart from using its incredibly brittle macro facility.

We programmers are the ultimate abstraction mechanism, and refining our techniques in pattern design and implementation in a codebase is our highest form of art. The list of patterns in the Gang of Four's "Design Patterns" is not as interesting as its first 50 pages, which are seminal.

From the organization of files in a project, to organization of projects, to class structure and use, to function design, to debug output, to variable naming as per scope, to commandline argument specification, to parsing, it's nothing but patterns upon patterns.

You're either doing patterns or you're doing one-offs, and one-offs are more brittle than C macros, are hard to comprehend later, and when you fix a bug in one, you've only fixed one bug, not an entire class of bugs.
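A small C sketch of that difference (xstrdup is a hypothetical helper name): when allocate-and-copy logic is pasted inline at every call site, each copy can carry its own off-by-one; factored into one function, a single fix repairs every caller.

```c
#include <stdlib.h>
#include <string.h>

/* The pattern, written once.  The classic bug class for this code,
   forgetting the terminating NUL's "+ 1", can now only live (and be
   fixed) in exactly one place, instead of at every pasted call site. */
char *xstrdup(const char *s)
{
    size_t n = strlen(s) + 1;   /* the "+ 1" bug class lives here, once */
    char *p = malloc(n);
    if (p != NULL)
        memcpy(p, s, n);
    return p;
}
```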

Abstraction is the essence of programming, and abstraction is just pattern design and implementation in a codebase, the design of a functional block and how it's consumed over time.

The layering of abstractions is the most fundamental perspective on a codebase. Abstractions not only handle scale, they make or break correctness, ease of malleability, bug triage, performance, and comprehensibility -- I'm sure I could find more.

The design of the layering of abstractions is the everything of a codebase.

The success of C's ability to let programmers create layers of abstractions is why C is the foundational language of the OS I'm using, as well as the browser I'm typing this message in. I'm guessing yours are, too, and, while I could be wrong, it's not likely. And not a segfault in sight. The scale of Unix is unmatched.


> The success of C's ability to let programmers create layers of abstractions is why C is the foundational language of the OS I'm using, as well as the browser I'm typing this message in.

What browser are you using that has any appreciable amount of C in it? They all went C++ ages ago because it has much better abstraction and organization capabilities.


That's a fair point that I hadn't considered. I was developing C+objects as C++ was first being released in the mid-90s, and then using Borland's C++ compiler in the early 2000s, but never really thought about it as anything more than what its name implies: "C with some more abstractions on top of it".

Thank you for the correction, but I consider C++ to be just a set of abstractions built upon C, and, if you think about it, none of those structures are separate from C, but merely overlaid upon it. I mean, it is still just ints, floats, and pointers grouped using fancier abstractions. Yes, they're often nicer and much easier to use than what I had to do to write a GUI on top of extended DOS, but it's all just wrappers around C, IMO.


C++ is very definitely not just wrappers around C and it's pretty ridiculous to frame it like that. Or if you want to insist on that, then C doesn't exist, either, as it's just a few small abstractions over assembly.


> The success of C's ability to let programmers create layers of abstractions

You wrote several entirely valid paragraphs about how important abstractions are and then put this at the end, when C has been eclipsed by 40+ years of better abstractions.


Because programmers are creating the abstractions, not the programming language.

And there is no OS I'm aware of that will threaten Unix's dominance any time soon.

I'm not against it, but C's being so close to what microprocessors actually do seems to be the story of its success, now that I think about it.

I personally haven't written in C for more than a half-decade, preferring Python, but everything I do in Python could be done in C, with enough scaffolding. In fact, Python is written in C, which makes sense because C++ would introduce too many byproducts to the tightness required of it.

I was programming C using my own object structuring abstractions as C++ was being developed and released. It can be done, and done well (as evidenced by curl), but it just requires more care, which comes down to the abstractions we choose.

So, I would say "eclipsed" is a bit strong a sentiment, especially given that our new favorite programming languages are running on OSes written in C.

If I had my druthers, I'd like everything to be F# with native compilation (i.e. not running using the .NET JIT), or OCaml with a more C-ish style of variable instantiation and no GC. But the impedance mismatch likely makes F# a poor choice for producing the kinds of precise abstractions needed for an OS, but that's just my opinion. Regardless, the code that runs runs via the microprocessor so the question really is, "What kinds of programming abstractions produce code that runs well on a microprocessor."

I've never thought of this before, thanks for the great question.


> And there is no OS I'm aware of that will threaten Unix's dominance any time soon.

Depends on the point of view, and what computing models we are talking about.

While iDevices and Android have a UNIX-like bottom layer, the userspace has nothing to do with UNIX, being developed in a mix of Objective-C, Swift, Java, Kotlin and C++.

There is no UNIX per se on game consoles, and even on Orbis OS, there is little of it left.

The famous Arduino sketches are written in C++ not C.

Windows, dominant in the games industry to the point that Valve failed to attract developers to write GNU/Linux games and had to come up with Proton instead, is not UNIX. The old-style Win32 C code has been practically frozen since Windows XP, with very few additions, as since Windows Vista the OS became heavily based on C++ and .NET code.

While macOS is UNIX certified, the userspace that Apple cares about, as NeXT's did before the acquisition, has very little to do with UNIX and C, and much more with Objective-C, C++ and Swift.

On the cloud native space, with managed runtimes on application containers or serverless, the exact nature of the underlying kernel or type 1 hypervisor is mostly irrelevant for application developers.


> I'd like everything to be F# with native compilation

This already works today (even with GUI applications) - just define replacements for printfn that don't rely on unbound reflection (2 LOC) and you're good to go: dotnet publish /p:PublishAot=true

To be clear, in .NET, both JIT runtime and ILC (IL AOT Compiler) drive the same back-end. The compiler itself is called RyuJIT but it really serves all kinds of scenarios today.

> makes F# a poor choice for producing the kinds of precise abstractions needed for an OS

You can do this in F# since it has access to all the same attributes for fine-grained memory layout and marshalling control C# does, but the experience of using C# for this is better (it is also, in general, better than using C). There are a couple areas where F# is less convenient to use than C# - it lacks C#'s lifetime analysis for refs and ref structs and its pattern matching does not work on spans and, again, is problematic with ref structs.


> there is no OS I'm aware of that will threaten Unix's dominance any time soon

True, but irrelevant?

> What kinds of programming abstractions produce code that runs well on a microprocessor

.. securely. Yes, this can be done in C-with-proofs (sel4), but the cost is rather high.

To a certain extent microprocessors have co-evolved with C because of the need to run the same code that already exists. And existing systems force new work to be done with C linkage. But the ongoing CVE pressure is never going to go away.


I'm not at all against a new model providing a more solid foundation for a new OS, but it's not going to be garbage collected, so the most popular of the newer languages make the pickings slim indeed.

> But the ongoing CVE pressure is never going to go away.

I think there are other ways to deflect or defeat that pressure, but I have no proof or work in that direction, so I really have nothing but admittedly wild ideas.

However, one potentially promising possibility in that direction is the dawn of immutable kernels. Once again, that's just an intuition on my part, and they can likely be defeated eventually, if only through weaknesses in the underlying hardware architecture, even though newer techniques such as timing attacks should be more easily detected because they rely on being massively brute force.

The question, to me, is: "Can whittling away at the inherent weaknesses reduce the vulns to a level of practical invulnerability?" I'm not hopeful that it can, but given the amount of work a complete reimplementation would require, it may simply be the best approach from a cost-benefit perspective, where having far fewer bugs and vulns is more feasible than guaranteed perfection. And, once again, such perfection would require the hardware architecture to be co-developed with the OS and its language to really create a bulletproof system, IMO.



