I remembered reading about this news back when that first message was posted on the mailing list, and didn't think much of it then (rust has been worming its way into a lot of places over the past few years, just one more thing I tack on for some automation)...
But seeing the maintainer works for Canonical, it seems like the tail (Ubuntu) keeps trying to wag the dog (Debian ecosystem) without much regard for the wider non-Ubuntu community.
I think the whole message would be more palatable if it weren't written as a decree including the dig on "retro computers", but instead positioned only on the merits of the change.
As an end user, it doesn't concern me too much, but someone choosing to add a new dependency chain to critical software plumbing does concern me, at least slightly, if it's not done for a very good reason.
Agreed. I think that announcement was unprofessional.
This was a unilateral decision affecting others' hard work, and the author didn't give them the opportunity to provide feedback on the change.
It disregards the importance of ports. Even if an architecture isn't widely used, supporting multiple architectures can help reveal bugs in the original implementation that wouldn't otherwise be obvious.
This is breaking support for multiple ports to rewrite some feature for a tiny security benefit. And doing so on an unacceptably short timeline. Introducing breakage like this is unacceptable.
There's no clear cost-benefit analysis done for this change. Canonical or Debian should work on porting the Rust toolchain (ideally with tier 1 support) to every architecture they release for, and actually put the horse before the cart.
I love and use rust, it is my favorite language and I use it in several of my OSS projects but I'm tired of this "rewrite it in rust" evangelism and the reputational damage it does to the rust community.
> I love and use rust, it is my favorite language and I use it in several of my OSS projects but I'm tired of this "rewrite it in rust" evangelism and the reputational damage it does to the rust community.
Thanks for this.
I know, intellectually, that there are sane/pragmatic people who appreciate Rust.
But often the vibe I’ve gotten is the evangelism, the clear “I’ve found a tribe to be part of and it makes me feel special”.
So it helps when the reasonable signal breaks through the noisy minority.
> I know, intellectually, that there are sane/pragmatic people who appreciate Rust.
For the most part that is almost everyone who works on Rust and writes Rust. The whole coreutils saga was pretty much entirely caused by Canonical; the coreutils rewrite project was originally a hobby project, iirc, and NOT ready for prod.
For the most part the coreutils rewrite is going well, all things considered: bugs are fixed quickly, and performance will probably exceed the original implementation in some cases, since concurrency is a cakewalk.
The whole "rewrite it in rust" idea largely stemmed from the notion that if you have a program in C and a program in Rust, then the program in Rust is "automatically" better, which is often the case. The exception is very large, battle-tested projects with custom tooling in place to ensure the issues that make C/C++ a nightmare are somewhat reduced. Rust ships with the borrow checker by default, so with that kind of tooling in place the comparison becomes more like for like.
In the real world it is not always the case: there are still plenty of opportunities for straight-up logic bugs and crashes (see the Cloudflare saga) that are entirely due to bad programming practices.
Rust gives you the hammer and the nail, but you can still hit your finger if you don't know how to swing it properly.
FYI, for the purpose of disclosing bias: I am one of the few "rust first" developers. I learned the language in 2021, and it was the first "real" programming language I learned to use effectively. Any attempts I have made to dive into other languages have been short-lived and incredibly frustrating, because Rust sets a first-class standard for what a systems programming language can be.
It really makes me upset that we are throwing away decades of battle tested code just because some people are excited about the language du jour. Between the systemd folks and the rust folks, it may be time for me to move to *BSD instead of Linux. Unfortunately, I'm very tied to Docker.
That “battle-tested code” is often still an enduring and ongoing source of bugs. Maintainers have to deal with the burden of working in a 20+ year-old code base with design and architecture choices that probably weren’t even a great idea back then.
Very few people are forcing “rewrite in rust” down anyone’s throats. Sometimes it’s the maintainers themselves who are trying to be forward-thinking and undertake a rewrite (e.g., fish shell), sometimes people are taking existing projects and porting them just to scratch an itch and it’s others’ decisions to start shipping it (e.g., coreutils). I genuinely fail to see the problem with either approach.
C’s long reign is coming to an end. Some projects and tools are going to want to be ahead of the curve, some projects are going to be behind the curve. There is no perfect rate at which this happens, but “it’s battle-tested” is not a reason to keep a project on C indefinitely. If you don’t think {pet project you care about} should be in C in 50 years, there will be a moment where people rewrite it. It will be immature and not as feature-complete right out the gate. There will be new bugs. Maybe it happens today, maybe it’s 40 years from now. But the “it’s battle tested, what’s the rush” argument can and will be used reflexively against both of those timelines.
It is basic knowledge that memory safety bugs are a significant source of vulnerabilities, and by now it is well-established that the first developer who can write C without introducing memory safety bugs hasn't been born yet. In other words: if you care about security at all, continuing with the status quo isn't an option.
The C ecosystem has tried to solve the problem with a variety of additional tooling. This has helped a bit, but didn't solve the underlying problem. The C community has demonstrated that it is both unwilling and unable to evolve C into a memory-safe language. This means that writing additional C code is a Really Bad Idea.
Software has to be maintained. Decade-old battle-tested codebases aren't static: they will inevitably require changes, and making changes means writing additional code. This means that your battle-tested C codebase will inevitably see changes, which means it will inevitably see the introduction of new memory safety bugs.
Google's position is that we should simply stop writing new code in C: you avoid the high cost and real risk of a rewrite, and you also stop the never-ending flow of new memory safety bugs. This approach works well for large and modular projects, but doing the same in coreutils is a completely different story.
Replacing battle-tested code with fresh code has genuine risks, there's no way around that. The real question is: are we willing to accept those short-term risks for long-term benefits?
And mind you, none of this is Rust-specific. If your application doesn't need the benefits of C, rewriting it in Python or Typescript or C# might make even more sense than rewriting it in Rust. The main argument isn't "Rust is good", but "C is terrible".
systemd has been the de facto standard for over a decade now and is very stable. I have found that even most people who complained about the initial transition are very welcoming of its benefits now.
Depends a bit on how you define systemd. Just found out that the systemd developers don't understand DNS (or IPv6). Interesting problems result from that.
I agree with everything you've said here, except that the reality of speaking with a "rust first" developer is making me feel suddenly ancient. But that aside, the memory safety parts are a huge benefit, but far from the only one. Option and Result types are delightful. Exhaustive matching expressions that won't compile if you add a new variant that's not handled are huge. Types that make it impossible to accidentally pass a PngImage into a function expecting a str, even though they might both be defined as contiguous series of bytes down deep, makes lots of bugs impossible. A compiler that gives you freaking amazing error messages that tell you exactly what you did wrong and how you can fix it sets the standard, from my experience. And things like "cargo clippy" which tell you how you could improve your code, even if it's already working, to make it more efficient or more idiomatic, are icing on the cake.
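A minimal sketch of a couple of those features, with invented types (not from any real codebase):

enum ImageFormat {
    Png,
    Jpeg,
    // Adding a `Webp` variant here turns every `match` below that doesn't
    // handle it into a compile error, instead of a latent runtime bug.
}

// A newtype: just bytes underneath, but the compiler won't let you pass a
// PngImage where some other byte buffer is expected.
struct PngImage(Vec<u8>);

fn extension(format: &ImageFormat) -> &'static str {
    match format {
        ImageFormat::Png => "png",
        ImageFormat::Jpeg => "jpg",
    }
}

fn first_byte(img: &PngImage) -> Option<u8> {
    // Option instead of a sentinel value or a null pointer.
    img.0.first().copied()
}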
People so often get hung up on Rust's memory safety features, and dismiss it as though that's all it brings to the table. Far from it! Even if Rust were unsafe by default, I'd still rather use it than, say, C or C++ to develop large, robust apps, because it has a long list of features that make it easy to write correct code, and really freaking challenging to write blatantly incorrect code.
Frankly, I envy you, except that I don't envy what it's going to be like when you have to hack on a non-Rust code base that lacks a lot of these features. "What do you mean, int overflow. Those are both constants! How come it didn't let me know I couldn't add them together?"
Much of the drive to rewrite software in Rust is a reaction to the decades-long dependence on C and C++. Many people out there sit in the burning room like the dog in that meme, saying "this is fine". Most of them don't have to deal at all directly with the consequences involved.
Rust is the first language for a long time with a chance at improving this situation. A lot of the pushback against evangelism is from people who simply want to keep the status quo, because it's what they know. They have no concept of the systemic consequences.
I'd rather see over-the-top evangelism than the lack of it, because the latter implies that things aren't going to change very fast.
On the other hand, the presence of an alternative is the persuasion.
It's very easy to justify for yourself why you aren't addressing the hard problems in your codebase. Combine that with a captive audience, and you end up with everyone running the same steaming heap of technical debt and being unhappy about it.
But the second an alternative starts to get off the ground there's suddenly a reason to address those big issues: people are leaving, and it is clear that complacency is no longer an option. Either evolve, or accept that you'll perish.
That was probably a mischaracterization on my part. I wouldn't consider rewriting almost everything useful that's currently in C or C++ to be over the top. That would be a net good.
Posts that say "I rewrote X in Rust!" shouldn't actually be controversial. Every time you see one, you should think to yourself wow, the software world is moving towards being more stable and reliable, that's great!
But it is nonsense. Every time someone rewrites something (in Rust or anything else), I instead worry about what breaks again, what important feature is lost for the next decade, how much working knowledge is lost, what muscle memory is now useless, what documentation is outdated, etc.
I also doubt Rust brings as many advantages in terms of stability as people claim. The C code I rely on in my daily work basically never fails (e.g. I can't remember "vim" ever crashing on me in the 30 years I've used it). That this is all rotten C code that needs to be rewritten is just nonsense. IMHO it would be far more useful to invest in proper maintenance and incremental improvements.
If you were right, then people should not be using Rust or C/C++. They should be using SPARK/Ada. The SPARK programming language, a subset of Ada, was used for the development of safety-critical software in the Eurofighter Typhoon, a British and European fighter jet. The software for mission computers and other systems was developed by BAE Systems using the GNAT Pro environment from AdaCore, which supports both Ada and SPARK. It's not just choosing the PL, but the whole environment including the managers.
Nvidia evaluated Rust and then chose SPARK/Ada for root of trust for GPU market segmentation licensing, which protects 50% profit margin and $4T market cap.
Sometimes good things are ruined by people around. I think Rust is fine, although I doubt its constraints are universally true and sensible in all scenarios.
> It disregards the importance of ports. Even if an architecture isn't widely used, supporting multiple architectures can help reveal bugs in the original implementation that wouldn't otherwise be obvious.
The problem is that those ports aren't supported and see basically zero use. Without continuous maintainer effort to keep software running on those platforms, subtle platform-specific bugs will creep in. Sometimes it's the application's fault, but just as often the blame will lie with the port itself.
The side-effect of ports being unsupported is that build failures or test failures - if they are even run at all - aren't considered blockers. Eventually their failure becomes normal, so their status will just be disregarded as noise: you can't rely on them to pass when your PR is bug-free, so you can't rely on their failure to indicate a genuine issue.
> Instead of just "builds with gcc" we would need to wait for Rust support.
There's always rustc_codegen_gcc (gcc backend for rustc) and gccrs (Rust frontend for gcc). They aren't quite production-ready yet, but there's a decent chance they're good enough for the handful of hobbyists wanting to run the latest applications on historical hardware.
As to adding new architectures: it just shifts the task from "write gcc backend" to "write llvm backend". I doubt it'll make much of a difference in practice.
> to rewrite some feature for a tiny security benefit
For what it's worth, the zero->one introduction of a new language into a big codebase always comes with a lot of build changes, downstream impact, debate, etc. It's good for that first feature to be some relatively trivial thing, so that it doesn't make the changes any bigger than they have to be, and so that it can be delayed or reverted as needed without causing extra trouble. Once everything lands, then you can add whatever bigger features you like without disrupting things.
> This is breaking support for multiple ports to rewrite some feature for a tiny security benefit. And doing so on an unacceptably short timeline. Introducing breakage like this is unacceptable.
Normally I’d agree, but the ports in question are really quite old and obscure. I don’t think anything would have changed with an even longer timeline.
I think the best move would have been to announce deprecation of those ports separately. As it was announced, people who will never be impacted by their deprecation are upset because the deprecation was tied to something else (Rust) that is a hot topic.
If the deprecation of those ports was announced separately I doubt it would have even been news. Instead we’ve got this situation where people are angry that Rust took something away from someone.
Those ports were never official, and so aren't being deprecated. Nothing changes about Debian's support policies with this change.
EDIT: okay so I was slightly too strong: some of them were official as of 2011, but haven't been since then. The main point that this isn't deprecating any supported ports is still accurate.
> It disregards the importance of ports. Even if an architecture isn't widely used, supporting multiple architectures can help reveal bugs in the original implementation that wouldn't otherwise be obvious.
Imo this is true for going from one to a handful, but less true when going from a handful to more. Afaict there are 6 official ports and 12 unofficial ports (from https://www.debian.org/ports/).
It really comes down to which architectures you're porting to. The two biggest issues are big endian vs little endian, and memory consistency models. Little endian is the clear winner for actively-developed architectures, but there are still plenty of vintage big endian architectures to target, and it looks like IBM mainframes at least are still exclusively big endian.
For memory consistency, Alpha historically had value as the weakest and most likely to expose bugs. But nobody really wants to implement hardware like that anymore, almost everything falls somewhere on the spectrum of behavior bounded by x86 (strict) and Arm (weaker), and newer languages (e.g. C++11) mean newer code can be explicit about its expectations rather than ambiguous or implicit.
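A tiny Rust illustration (names invented) of the kind of bug a big-endian port flushes out: reinterpreting raw bytes assumes the host's byte order, while an explicit conversion behaves the same everywhere.

fn parse_length_portable(header: [u8; 4]) -> u32 {
    u32::from_le_bytes(header) // the format says little-endian, so say so explicitly
}

fn parse_length_native(header: [u8; 4]) -> u32 {
    u32::from_ne_bytes(header) // "works on my x86", silently wrong on a big-endian host
}

fn main() {
    let header = [0x10, 0x00, 0x00, 0x00]; // little-endian encoding of 16
    assert_eq!(parse_length_portable(header), 16);
    println!("{}", parse_length_native(header)); // 16 only on little-endian machines
}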
Command line utilities often handle not-fully-trusted data, and are often called from something besides an interactive terminal.
Take for example git: do you fully trust the content of every repository you clone? Sure, you'll of course compile and run it in a container, but how prepared are you for the possibility of the clone process itself resulting in arbitrary code execution?
The same applies to the other side of the git interaction: if you're hosting a git forge, it is basically a certainty that whatever application you use will call out to git behind the scenes. Your git forge is connected to the internet, so anyone can send data to it, so git will be processing attacker-controlled data.
There are dozens of similar scenarios involving tools like ffmpeg, gzip, wget, or imagemagick. The main power of command line utilities is their composability: you can't assume it'll only ever be used in isolation with trusted data!
Some people might complain about the startup cost of a language like Java, though: there are plenty of scripts around which are calling command-line utilities in a very tight loop. Not every memory-safe language is suitable for every command-line utility.
I totally agree. In reality, today, if you want to produce auditable high-integrity, high-assurance, mission-critical software, you should be looking at SPARK/Ada and even F* (fstar). SPARK has legacy real-world apps and a great ecosystem for this type of software. F* is being used on embedded systems and in other real-world apps where formal verification is necessary or highly advantageous. Whether I like Rust or not should not be the defining factor. AdaCore has a verified Rust compiler, but the tooling around it does not compare to that around SPARK/Ada. I've heard younger people complain about PLs being verbose, boring, or not their thing, and unless you're a diehard SPARK/Ada person, you probably feel that way about it too. But sometimes the tool doesn't have to be sexy or the latest thing to be the right thing to use. Name one real-world Rust app older than 5 years that is in this category.
> Name one real-world Rust app older than 5 years that is in this category.
Your "older than 5 years" requirement isn't really fair, is it? Rust itself had its first stable release barely 10 years ago, and mainstream adoption has only started happening in the last 5 years. You'll have trouble finding any "real-world" Rust apps older than 5 years!
As to your actual question: The users of Ferrocene[0] would be a good start. It's Rust but certified for ISO 26262 (ASIL D), IEC 61508 (SIL 4) and IEC 62304 - clearly someone is interested in writing mission-critical software in Rust!
The point was how you would justify choosing Rust based on any real-world proof. Maybe it will be ready in a few years, but even then it is far from achieving what you already have in SPARK, along with its proven legacy. I am very familiar with this, and I still chose SPARK/Ada instead of Rust. SPARK is already certified for all of this. And aerospace, railway, and other high-integrity app industries are already familiar with the output of the SPARK tools, so there's less friction and time in auditing them for certification. Aside from AdaCore, who collaborated with Ferrocene to get a compiler certified, I don't see much traction to change our decision. We are creating show control software for cyber-physical systems with potential dire consequences, so we did a very in-depth study in Q1 2025, and Rust came up short.
> and the author didn't give them the opportunity to provide feedback on the change.
This is wrong: the author wrote a mail about _intended_ changes to the right Debian mailing list _half a year_ before shipping them. That is _exactly_ how giving people an opportunity to give feedback before making a change works...
Sure, they made it clear they don't want the discussion to be sidetracked onto topics Debian doesn't officially support. That is not nice, but it's understandable; I have seen way too much time wasted on discussions being derailed.
The only problem here is people overthinking things and/or having issues with very direct language IMHO.
> This is breaking support for multiple ports to rewrite some feature for a tiny security benefit
It's not breaking anything supported.
The only things breaking are unsupported, and they only see niche use anyway.
Nearly all projects have very limited capacity and have to draw boundaries, and the most basic boundary is that unsupported means unsupported. This doesn't mean you don't keep unsupported use cases in mind or try to avoid accidentally breaking them, but it means they don't majorly influence your decisions.
> And doing so on an unacceptably short timeline
Half a year for a change which only breaks unsupported things isn't "unacceptably short"; it's actually pretty long. If this weren't OSS you could be happy to get one month, and most likely less. People complain about how few resources OSS projects have, but the scary truth is most commercial projects have even fewer resources and must ship by a deadline. Hence it's very common for them to be far worse when it comes to code quality, technical debt, incorrectly handled niche error cases, etc.
> to every architecture they release for
The Rust toolchain has support for every architecture _they_ release for;
it only breaks architectures that niche, unofficial 3rd-party ports support.
Which is sad, sure, but unsupported is, in the end, unsupported.
> cost-benefit analysis done for this change.
Who says it wasn't done at all? People have done such analyses over and over on the internet for all kinds of Linux distributions. But either way, you wouldn't include that in a mail announcing an intent to change (as you don't want discussions to be sidetracked). Also, the benefits are pretty clear:
- using Sequoia for PGP seems to be the main driving force behind this decision; that project exists because of repeatedly running into issues (including security issues) with the existing PGP tooling. It happens to use Rust, but if there were no Rust it would still exist, just in a different language.
- some file format parsing is in a pretty bad state, to the point that you will most likely have to rewrite it to fix it and make it robust. If you're rewriting anyway, using Rust is preferable.
- and long term: due to the clear, proven(1) benefits of using Rust for _new_ projects/code, more and more of them use it; by not "allowing" Rust to be required, Debian bars itself from using any such project (like e.g. Sequoia, which seems to be the main driver behind this change)
> this "rewrite it in rust" evangilism
which isn't part of this discussion at all,
the main driving factor seems to be the desire to use Sequoia, not because Sequoia is in Rust but because Sequoia is very well made and well tested.
Similarly, Sequoia isn't a "let's rewrite everything in Rust" project; rather, the state of PGP tooling was so painful for certain use cases (not all), in ways you couldn't fix by contributing upstream, that some people needed new tooling, and Rust happened to be the choice for implementing it.
> Canonical or Debian should work on porting the Rust toolchain (ideally with tier 1 support) to every architecture they release for, and actually put the horse before the cart.
They already have a Rust toolchain for every system Debian releases for.
The only architectures they're arguing about are non-official Debian ports for "Alpha (alpha), Motorola 680x0 (m68k), PA-RISC (hppa), and SuperH (sh4)", two of which are so obscure I've never even heard of them, and one of the others is most famous for powering retro video game systems like the Sega Genesis.
> I love and use rust, it is my favorite language and I use it in several of my OSS projects but I'm tired of this "rewrite it in rust" evangelism and the reputational damage it does to the rust community.
This right here.
As a side note, I was reading one of Cloudflare's docs on how it implemented its firewall rules, and it's so utterly disappointing how the document suddenly stops being informative and starts to read like a parody of the whole cargo cult around Rust. Rust this, Rust that, and I was there trying to read up on how Cloudflare actually supports firewall rules. The way they focus on a specific and frankly irrelevant implementation detail conveys the idea that things are run by amateurs who are charmed by a shiny toy.
> I think the whole message would be more palatable if it weren't written as a decree including the dig on "retro computers", but instead positioned only on the merits of the change.
The wording could have been better, but I don’t see it as a dig. When you look at the platforms that would be left behind they’re really, really old.
It’s unfortunate that it would be the end of the road for them, but holding up progress for everyone to retain support for some very old platforms would be the definition of the tail wagging the dog. Any project that starts holding up progress to retain support for some very old platforms would be making a mistake.
It might have been better to leave out any mention of the old platforms in the Rust announcement and wait for someone to mention it in another post. As it was written, it became an unfortunate focal point of the announcement despite having such a small impact that it shouldn’t be a factor holding up progress.
Not just really, really old; they have in fact long since lost any semblance of official support.
I get the friction, especially for younger contributors, not that this is the case here. However, there are architectures that haven't even received a revision in their lifetime, yet some old heads will take even the slightest inconvenience to their hobbyist port as a personal slight for which heads must roll.
I haven't seen any complaints from anyone who uses those ports personally. I would bet there's someone out there who uses Debian on those platforms, but 100% of the complaining I've seen online has been from people who don't use those ports.
It's the idea that's causing the backlash, not the impact.
> The wording could have been better, but I don’t see it as a dig.
He created (or at least re-activated) a dichotomy for zero gain, and he vastly increased the expectations for what a Rust rewrite can achieve. That is very, very bad in a software project.
The evidence for both is in your next paragraph. You immediately riff on his dichotomy:
> It’s unfortunate that it would be the end of the road for them, but holding up progress for everyone to retain support for some very old platforms would be the definition of the tail wagging the dog.
(My emphasis.)
He wants to do a rewrite in Rust to replace old, craggy C++ that is so difficult to reason about that there's no chance of attracting new developers to the maintenance team with it. Porting to Rust therefore a) addresses memory safety, b) gives a chance to attract new developers to a core part of Debian, and c) gives the current maintainer a way to eventually leave gracefully in the future. I think he even made some of these points here on HN. Anyone who isn't a sociopath sympathizes with these points. More importantly, accidentally introducing some big, ugly bug in Rust apt isn't at odds with these goals. It's almost an expected part of the growing pains of a rewrite plus onboarding new devs.
Compare that to "holding up progress for everyone." Just reading that phrase makes me force sensitive like a Jedi: I can feel the spite of dozens HN'ers tingling at that and other phrases in these HN comments as they sharpen their hatred, ready to pounce at the Rust Evangelists the moment this project hits a snag. (And, like any project, it will hit snags.)
1. "I'm holding on for dear life here, I need help from others and this is the way I plan to get that help"
2. "Don't hold back everyone else's progress, please"
The kind of people who hear "key party" and imagine clothed adults reciting GPG fingerprints need to comprehend that #1 and #2 are a) completely different strings and b) have very different-- let's just say magical-- effects on the behavior of even small groups of humans.
> As an end user, it doesn't concern me too much ...
It doesn't concern me either, but there's some attitude here that makes me uneasy.
This could have been managed better. I see a similar change in the future that could affect me, and there will be precedent. Canonical paying Devs and all, it isn't a great way of influencing a community.
I agree. It's sad to see maintainers take a "my way or the highway" approach to package maintenance, but this attitude has gradually become more accepted in Debian over the years. I've seen this play before, with different actors: gcc maintainers (regarding cross-bootstrapping ports), udev (regarding device naming, I think?), systemd (regarding systemd), and now with apt. Not all of them involved Canonical employees, and sometimes the Canonical employees were the voice of reason (e.g. that's how I remember Steve Langasek).
I'm sure some will point out that each example above was just an isolated incident, but I perceive a growing pattern of incidents. There was a time when Debian proudly called itself "The Universal Operating System", but I think that hasn't been true for a while now.
> It's sad to see maintainers take a "my way or the highway" approach to package maintenance, but this attitude has gradually become more accepted in Debian over the years.
It's frankly the only way to maintain a distribution that relies almost completely on volunteer work! The more different options there are, the more expensive (in terms of human cost, engineering time, and hardware cost) testing gets.
It's one thing if you're, say, Red Hat, with a serious number of commercial customers; they can and do pay for conformance testing and all the options. But for a fully FOSS project like Debian, eventually it becomes unmaintainable.
Additionally, the more "liberty" distributions take in how the system is set up, the more work software developers have to put in. Just look at autotools, an abomination that is sadly necessary.
> Canonical paying Devs and all, it isn't a great way of influencing a community.
That's kind of the point of modern open source organizations. Let corporations fund the projects, and in exchange they get a say in terms of direction, and hopefully everything works out. The bigger issue with Ubuntu is that they lack vision, and when they ram things through, they give up at the slightest hint of opposition (and waste a tremendous amount of resources and time along the way). For example, Mir and Unity were perfectly fine technologies, but they retired them because they didn't want to see things through. For such a successful company, it's surprising that their technical direction-setting is so unserious.
> I think the whole message would be more palatable if it weren't written as a decree including the dig on "retro computers"
Yes, and more generally, as far as I am concerned, the antagonizing tone of the message, which is probably partly responsible for this micro-drama, is typical of some Rust zealots who never miss an occasion to remind C/C++ folks that they are dinosaurs (in their eyes). When you promote your thing by belittling others, you are doing it wrong.
There are many high-profile DDs who work or have worked for Canonical and who are emphatically not the inverse: Canonical employees who merely happen to be part of the Debian org.
The conclusion you drew is perfectly reasonable but I’m not sure it is correct, especially when in comparison Canonical is the newcomer. It could even be seen to impugn their integrity.
If you look at the article, it seems like the hard dependency on Rust is being added for parsing functionality that only Canonical uses:
> David Kalnischkies, who is also a major contributor to APT, suggested that if the goal is to reduce bugs, it would be better to remove the code that is used to parse the .deb, .ar, and .tar formats that Klode mentioned from APT entirely. It is only needed for two tools, apt-ftparchive and apt-extracttemplates, he said, and the only "serious usage" of apt-ftparchive was by Klode's employer, Canonical, for its Launchpad software-collaboration platform. If those were taken out of the main APT code base, then it would not matter whether they were written in Rust, Python, or another language, since the tools are not directly necessary for any given port.
Mmm, apt-ftparchive is pretty useful for cooking up repos for "in-house" distros (which we certainly thought was serious...) but those tools are already a separate binary package (apt-utils) so factoring them out at the source level wouldn't be particularly troublesome. (I was going to add that there are also nicer tools that have turned up in the last 10 years but the couple of examples I looked at depend on apt-utils, oops)
I know you can make configure-time decisions based on the architecture and ship a leaner apt-utils on a legacy platform, but it's not as obvious as "oh yeah that thing is fully auxiliary and in a totally different codebase".
I understand, but the comment to which I was replying implied that this keeps happening, and in general. That’s not fair to the N-1 other DDs who aren’t the subject of this LWN article (which I read!)
The most interesting criticism / idea in the article was that the parts that are intended for Rust-ification should actually be removed from core apt.
> it would be better to remove the code that is used to parse the .deb, .ar, and .tar formats [...] from APT entirely. It is only needed for two tools, apt-ftparchive and apt-extracttemplates [...]
Another interesting, although perhaps tangential, criticism was that the "new solver" currently lacks a testsuite (unit tests; it has integration tests). I'm actually kind of surprised that writing a dependency solver is a greenfield project instead of using an existing one. Or is this just a dig at something that pulls in a well-tested external module for solving?
Given that Cargo is written in Rust, you would think there would be at least one battle tested solver that could be used. Perhaps it was harder to extract and make generic than write a new one?
Cargo's solver incorporates concepts that .debs don't have, like Cargo features, and I'm sure that .debs have features that Cargo packages don't have either.
Historically apt hasn't had much of a "solver". It's basically take the user's upgrade/install action, if there's some conflict or versioned requirement, go to the candidate (≈newest barring pinfile shenanigans) of the involved packages, and if there's still a conflict, bail.
It was always second-tier utilities like Aptitude that tried to search for a "solution" to conflicting packaging constraints, but this has always been outside of the core functionality, and if you accepted one of Aptitude's proposed paths, you would do so knowing that the next apt dist-upgrade was almost certainly going to hose everything again.
I think the idea in Apt-world is that it's the responsibility of the archive maintainer to at all times present a consistent index for which the newest versions of everything can coexist happily together. But this obviously breaks down when multiple archives are active on the same apt conf.
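A rough Rust sketch of that "jump to the candidate and bail" behaviour described above; this is not apt's actual code, and the names and types are invented:

use std::collections::HashMap;

#[derive(Debug)]
enum ResolveError {
    Missing(String),
    Unsatisfiable(String),
}

fn naive_resolve(
    deps: &[(String, u32)],            // (package name, minimum version required)
    candidates: &HashMap<String, u32>, // package name -> candidate (newest) version
) -> Result<Vec<(String, u32)>, ResolveError> {
    let mut plan = Vec::new();
    for (name, min_version) in deps {
        let candidate = *candidates
            .get(name)
            .ok_or_else(|| ResolveError::Missing(name.clone()))?;
        if candidate < *min_version {
            // No backtracking and no search for an alternative solution:
            // just give up, leaving the searching to a separate tool.
            return Err(ResolveError::Unsatisfiable(name.clone()));
        }
        plan.push((name.clone(), candidate));
    }
    Ok(plan)
}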
Unit tests do not make Software Engineering. That's simply part of the development phase, which should be the smallest phase out of all the phases involved in REAL Software Engineering, which is rarely even done these days outside of DO-178 (et al) monotony. The entire private-to-public industry has even polluted upper management in defense software engineering into accepting SCRUM as somehow more desirable than the ability to effectively plan your requirements and execute without deviation. Yes it's possible, and yes it's even plausible. SWE laziness turns Engineers into developers. Running some auto-documentation script or a generic non-official block diagram is not the same as a Civil PE creating blueprints for a house, let alone a mile-long bridge or skyscraper.
As far as I understand it, the idea behind scrum is not that you don't plan; it's that you significantly shorten the planning-implementation-review cycle.
Perhaps that was the ideal when it was laid out, but the reality of the common implementation is that planning is dispensed with. It gives some management a great excuse to look no further than the next Jira ticket, if that.
The ideal implementation of a methodology is only relevant for the small number of managers who would do well with almost any methodology, because they will take the initiative to improve whatever they are doing. The best methodology for wide adoption is the one that works okay for the largest number of managers who struggle to take responsibility or initiative.
That is to say, the methodology that requires management to take responsibility in its "lowest energy state" is the best one for most people-- because they will migrate to the lowest energy state. If the "lowest energy state" allows management to do almost nothing, then they will. If the structure allows being clueless, a lot of managers will migrate to pointy haired Dilbert manager cluelessness.
With that said; I do agree with getting products to clients quickly, getting feedback quickly, and being "agile" in adapting to requirements; but having a good plan based on actual knowledge of the requirements is important. Any strict adherence to an extreme methodology is probably going to fail in edge cases, so having the judgement of when to apply which methodology is a characteristic of good management. You've got to know your domain, know your team, and use the right tool for the job.
I've got a bridge to sell. It's made from watered-down concrete and comes with blueprints written on site. It was very important to get the implementation started asap to shorten the review cycle.
Nonsense. I know and talk to multiple engineers all the time, and they all envy our ability to keep fixing issues in a project.
Mechanical engineers have to work around other components' failures all the time, because their lead times are gigantic and, no matter how much planning they do, failures still pop up.
The idea that Software Engineering has more bugs is absurd. Electronic, mechanical, and electrical engineers all face similar issues to what we face, and they normally don't have the capacity to deploy fixes as fast as we do because of real-world constraints.
I think you are being reductive in your original comment. The idea of cycling between planning and implementation is nothing new, and it is widely used in the other disciplines. Saying that agile is the problem is misguided, and pointing to other engineering disciplines as if "they do it better" is usually a sign that you don't talk to those engineers.
Of course we can plan things better, but implementation does inform planning and vice versa, and denying that is denying reality.
Integration tests are still tests. There are definitely cases for tools where you can largely get by without unit tests in favor of integration tests. I've written a lot of code generation tools this way for instance.
Unit tests are for testing branchiness— what happens in condition X, what about condition Y? Does the internal state remain sane?
Integration tests are for overall sanity— do a few happy paths basically work? what about when we make changes to the packaging metadata or roll dependencies forward?
Going unit-test free makes total sense in the case of code that doesn't have much in the way of branching, or where the exceptional cases can just be an uncontrolled exit. Or if you feel confident that your type system's unions are forcing you to cover your bases. Either way, you don't need to test individual functions or modules if running the whole thing end to end gives you reasonable confidence in those.
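A tiny sketch of that split, with invented function names: the unit test pokes at the branchy helper directly, while the integration-style test runs one happy path through the whole thing.

fn classify(size: u64) -> &'static str {
    match size {
        0 => "empty",
        1..=4096 => "small",
        _ => "large",
    }
}

fn summarize(sizes: &[u64]) -> String {
    sizes.iter().map(|s| classify(*s)).collect::<Vec<_>>().join(",")
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn classify_covers_every_branch() { // unit test: exercise each condition
        assert_eq!(classify(0), "empty");
        assert_eq!(classify(10), "small");
        assert_eq!(classify(1 << 20), "large");
    }

    #[test]
    fn summarize_happy_path() { // integration-style: whole pipeline, one path
        assert_eq!(summarize(&[0, 10]), "empty,small");
    }
}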
I didn't say they're not. Integration tests definitely help towards "being tested".
> There are definitely cases for tools where you can largely get by without unit tests in favor of integration tests.
Very strong disagree. I think there are no cases where a strong integration test regime can allow a software project to forego unit tests.
Now, that said, we're probably talking the same thing with different words. I think unit tests with mocks are practically useless. But mocks are the definition of most people's unit tests. Not to me; to me unit tests use real code and real objects. To me, a unit test is what a lot of people call an integration test. And, to me, what I call an integration test, is often what people call system tests or end-to-end tests.
> I think unit tests with mocks are practically useless
IMO that's on the extreme side too. I've seen a fair share of JUnit monstrosities with 10+ mocks injected "because the project has been written this way so we must continue this madness", but mocking can be done right, it's just overused so much that, well, maybe you're right - it's easier to preach it out than teach how to do it right.
No, because some things that are UB in C are not in Rust, and vice versa, so any codegen has to account for that and will result in additional verbosity that you wouldn't see in "native" code.
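One concrete example of that verbosity (a sketch, not the output of any particular translator): signed overflow is UB in C, so translated code can't just write `x + y` - which panics on overflow in Rust debug builds - and instead has to pick a behaviour and spell it out.

fn add_like_c(x: i32, y: i32) -> i32 {
    x.wrapping_add(y) // explicit two's-complement wrapping, defined behaviour in Rust
}

fn main() {
    println!("{}", add_like_c(i32::MAX, 1)); // -2147483648, no UB and no panic
}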
Every time I consider learning Rust, I am taken aback by how... "janky" the syntax is. It seems to me that we ought to have a system-level language which builds upon the learnings of the past 20+ years. Can someone help me understand this? Why are we pushing forward with a language that has a Perl-esque unreadability...?
Comparison: I often program in Python (and teach it) - and while it has its own syntax warts & frustrations - overall the language has a "pseudocode which compiles" approach, which I appreciate. Similarly, I appreciate what Kotlin has done with Java. Is there a "Kotlin for Rust"? or another high quality system language we ought to be investing in? I genuinely believe that languages ought to start with "newbie friendliness", and would love to hear challenges to that idea.
You might find this blog post interesting, which argues that it's Rust's semantics and not its syntax that results in the noisiness, i.e. it's intrinsic complexity:
I found it reasonably convincing. For what it's worth, I found Rust's syntax quite daunting at first (coming from Python as well), but it only took a few months of continuous use to get used to it. I think "Perl-esque" is an overstatement.
It has some upsides over Python as well, notably that the lack of significant whitespace means inserting a small change and letting the autoformatter deal with syntax changes is quite easy, whereas in Python I occasionally have to faff with indentation before Black/Ruff will let me autoformat.
I appreciate that for teaching, the trade-offs go in the other direction.
I'm not sure which of the dozen Rust-syntax supporters I should reply to, but consider something like these four (probably equivalent) syntaxes:
let mut a = Vec::<u32>::new();
let mut b = <Vec::<u32>>::new();
let mut c = <Vec<u32>>::new();
let mut d: Vec<u32> = Vec::new();
Which one will your coworker choose? What will your other coworkers choose?
This is day one stuff for declaring a dynamic array. What you really want is something like:
let mut z = Vec<u32>::new();
However, the grammar is problematic here because of using less-than and greater-than as brackets in a type "context". You can explain that as either not learning from C++'s mistakes or trying to appeal to a C++ audience I guess.
Yes, I know there is a `vec!` macro. Will you require your coworkers to declare a similar macro when they start to implement their own generic types?
There are lots of other examples when you get to what traits are required to satisfy generics ("where clauses" vs "bounds"), or the lifetime signature stuff and so on...
You can argue that strong typing has some intrinsic complexity, but it's tougher to defend the multiple ways to do things, and that WAS one of Perl's mantras.
Being able to use disambiguated syntaxes, and being able to add extra brackets, isn't an issue.
PS. The formatting tooling normalizes your second and third example to the same syntax. Personally I think it ought to normalize both of them to the first syntax as well, but it's not particularly surprising that it doesn't because they aren't things anyone ever writes.
It's really not. Only one of my examples has the equivalent of superfluous parens, and none are dereferencing anything. And I'm not defending C or C++ anyways.
When I was trying to learn Rust (the second time), I wanted to know how to make my own types. As such, the macro `vec!` mentioned elsewhere isn't really relevant. I was using `Vec` to figure things out so I could make a `FingerTree`:
let v: Vec<u32> = Vec::new(); // Awfully Java-like in repeating myself
let v = Vec::new(); // Crap, I want to specify the type of Vec
let v = Vec<u32>::new(); // Crap, that doesn't compile.
> let v = Vec::new(); // Crap, I want to specify the type of Vec
This kinda implies you've gone wrong somewhere. That doesn't mean there aren't cases where you need type annotations (they certainly exist!) but that if `Vec::new()` doesn't compile because the compiler couldn't deduce the type, it implies something is off with your code.
It's impossible to tell you exactly what the problem was, just that `<Vec<T>>::new()` is not code that you would ever see in a Rust codebase.
It's funny to be lumped in with "the Rust-syntax supporters", because I don't actually like it very much aesthetically speaking :^)
I mostly agree with your points. The angle brackets were definitely a mistake. The fact there are multiple ways to do it mostly comes down to the fact Rust bet big on type inference of local variables for ergonomics (which I've always been on the fence about myself).
So why do I defend Rust's syntax if I don't especially like it? I basically think we need to look at the bigger picture. C++ parsing is notoriously awful. It is extremely context sensitive, requires semantic information at parse level, it has serious quirks that have caused serious bugs (e.g.: vexing parse, dangling else), and in fact is literally undecidable! In contrast, Rust's grammar is extremely consistent and (almost) totally context-free. It may make choices that are aesthetically debatable, it may look noisy if you're not used to it (and maybe this even impacts readability, especially for beginners), but these feel like rather minor concerns.
For me, the _main_ design goal for syntax is to not cause major headaches for either compiler authors or language users, and I would say Rust ticks those boxes. I'm reticent to generalise my own experience, but I suspect that most people that find Rust ugly at first blush would get used to it pretty quickly, and that was really my main point!
I've only ever seen `a` and `d`. Personally I prefer `a`. The only time I've seen `c` is for trait methods like `<Self as Trait<Generic>>::func`. Noisy? I guess. Not sure how else this could really be written.
Fwiw, I didn't go looking for obscure examples to make HN posts. I've had three rounds of sincerely trying to really learn and understand Rust. The first was back when pointer types had sigils, but this exact declaration was my first stumbling block on my second time around.
The first version I got working was `d`, and my first thought was, "you're kidding me - the right hand side is inferring its type from the left?!?" I didn't learn about "turbo fish" until some time later.
> The first version I got working was `d`, and my first thought was, "you're kidding me - the right hand side is inferring its type from the left?!?" I didn't learn about "turbo fish" until some time later.
Tbh d strikes me as the most normal - right hand sides inferring the type from the left exists in basically every typed language. Consider for instance the C code
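Something along these lines, say, where the declared type on the left is what gives the initializer on the right its meaning (struct and field names invented for illustration):

some_struct s = { .flag = true, .value = 123, .stuff = 0.456 };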
Doing this inference at a distance is more of a feature of the sml languages (though I think it now exists even in C with `auto`) - but just going from left to right is... normal.
I see your point, and it's a nice example, but not completely parallel to the Rust/StandardML thing. Here, your RHS is an initializer, not a value.
// I don't think this flies in C or C++,
// even with "designated initializers":
f({ .flag = true, .value = 123, .stuff=0.456});
// Both of these "probably" do work:
f((some_struct){ .flag = true, ... });
f(some_struct{ .flag = true, ... });
// So this should work too:
auto a = (some_struct){ .flag = true, ... };
Take all that with a grain of salt. I didn't try to compile any of it for this reply.
Anyways, I only touched SML briefly 30 some years ago, and my reaction to this level of type inference sophistication in Rust went through phases of initial astonishment, quickly embracing it, and eventually being annoyed at it. Just like data flows from expressions calculating values, I like it when the type inference flows in similarly obvious ways.
exactly. you specify types for function parameters and structs and let the language do its thing. it's a bit of a niche thing to specify a type within a function...
There is a reason the multiple methods detailed above exist, mostly for iterator syntax, such as summing an array or calling collect on an iterator. Most Rust devs probably don't use all of these forms in a single year, or maybe even in their careers.
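For example, a small sketch of where that shows up:

fn main() {
    // Telling `collect` what to build when nothing else pins the type down...
    let evens = (1..10u32).filter(|n| n % 2 == 0).collect::<Vec<u32>>();
    // ...or annotating the binding instead, e.g. for `sum`.
    let total: u32 = evens.iter().sum();
    println!("{total}");
}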
I can't believe that a flexible, powerful syntax is considered limiting or confusing by some people. There are way more confusing edge-case syntax keywords in C++ that are huge foot-guns.
I mean, the fact that you mention "probably equivalent" is part of the reality here: Nobody writes the majority of these forms in real code. They are equivalent, by the way.
In real code, the only form I've ever seen out of these in the wild is your d form.
This is a No True Scotsman style counter-argument, and it's hard for me to make a polite reply to it.
There are people who program with a "fake it till you make it" approach, cutting and pasting from Stack Overflow, and hoping the compiler errors are enough to fix their mess. Historically, these are the ones your pages/books cater to, and the ones who think the borrow checker is the hard part. It doesn't surprise me that you only see code from that kind of beginner and experts on some rust-dev forum and nothing in between.
The issue though is that this isn't a solvable "problem". This is how programming language syntax works. It's like saying that C's if syntax is bad because these are equivalent:
if (x > y) {
if ((x > y)) {
if (((x) > (y))) {
Yes, one of your co-workers may write the third form. But it's just not possible for a programming language to stop this from existing, or at least, maybe you could do it, but it would add a ton of complexity for something that in practice isn't a problem.
Well, the solution usually isn't in syntax, but it often is solved by way of code formatters, which can normalize the syntax to a preferred form among several equivalent options.
I suspect rustfmt would consider this out of scope, but there should be a more... "adventurous" code formatter that does more opinionated changes. On the other hand, you could write a clippy lint today and rely on rustfix instead
Only `b` has the equivalent of "superfluous parens".
It's practically your job to defend Rust, so I don't expect you to budge even one inch. However, I hate the idea of letting you mislead the casual reader that this is somehow equivalent and "just how languages work".
The grammar could've used `Generic[Specific]` with square brackets and avoided the need for the turbo fish.
It hasn't been my job to work on Rust for years now. And even then, it was not to "defend" Rust, but to write docs. I talk about it on my own time, and I have often advocated for change in Rust based on my conversations with users.
If you're being overly literal, yes, the <>s are needed here for this exact syntax. My point was not about this specific example, it's that these forms are equivalent, but some of them are syntactically simpler than others. The existence of redundant forms does not make the syntax illegitimate, or overly complex.
For this specific issue, if square brackets were used for generics, then something else would have to change for array indexing, and folks would be complaining that Rust doesn't do what every other language does here, which is its own problem.
A compiler could disambiguate, but the goal is to have parsing happen without knowing if A is a type or a variable. That is the inappropriate intertwining of parsing and semantics that languages are interested in getting away from, not continuing with.
Anyway, just to be clear: not liking the turbofish is fine, it's a subjective preference. But it's not an objective win, that's all I'm saying. And it's only one small corner of Rust's syntax, so I don't think that removing it would really alleviate the sorts of broad objections that the original parent was talking about.
The problem here is that angle brackets are semantics dependent syntax. Whether they are brackets or not depends on semantic context. Conversely square brackets are always brackets.
Square brackets would be semantically dependent if they appeared in the same position as angle brackets. There's nothing magical about [] that makes the problems with <> disappear.
I think Perl-esque is apt, but that's because I've done quite a bit of Perl and think the syntax concerns are overblown. Once you get past the sigils on the variables Perl's syntax is generally pretty straightforward, albeit with a few warts in places like almost every language. The other area where people complained about Perl's opaqueness was the regular expressions, which most languages picked up anyway because people realized just how useful they are.
Once you're writing Rust at full speed, you'll find you won't be putting lifetimes and trait bounds on everything. Some of this becomes implicit, some of it you can just avoid with simpler patterns.
When you write Rust code without lifetimes and trait bounds and nested types, the language looks like Ruby lite.
When you write Rust code with traits or nested types, it looks like Java + Ruby.
When you sprinkle in the lifetimes, it takes on a bit of character of its own.
It honestly isn't hard to read once you use the language a lot. Imagine what Python looks like to a day zero newbie vs. a seasoned python developer.
You can constrain complexity (if you even need it) to certain modules, leaving other code relatively clean. Imagine the Python modules that use all the language features - you've seen them!
One of the best hacks of all: if you're writing HTTP services, you might be able to write nearly 100% of your code without lifetimes at all. Because almost everything happening in request flow is linear and not shared.
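A sketch of that style with invented types (no particular framework implied): the handler takes owned data in and hands owned data back, so no lifetime annotations ever appear in the request path.

struct Request {
    path: String,
}

struct Response {
    status: u16,
    body: String,
}

fn handle(req: Request) -> Response {
    Response {
        status: 200,
        body: format!("you asked for {}", req.path),
    }
}

fn main() {
    let resp = handle(Request { path: "/health".to_string() });
    println!("{} {}", resp.status, resp.body);
}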
I'm trying to sell Rust to someone who is worried about it. I'm not trying to sound elitist. I want people to try it and like it. It's a useful tool. I want more people to have it. And that's not scaring people away.
Rust isn't as hard or as bad as you think. It just takes time to let it sink in. It's got a little bit of a learning curve, but that pain goes away pretty quick.
Once you've paid that down, Rust is friendly and easy. My biggest gripe with Rust is compile times with Serde and proc macros.
I think this depends a LOT on what you're trying to do and what you need to learn to do it. If you can get by with the std/core types and are happy with various third party crates, then you don't really need to learn the language very deeply.
However, if you want to implement new data structures or generic algorithms, it gets very deep very quickly.
That article is really good, because it highlights that Rust doesn't have to look messy. Part of the problem, I think, is that there are a few too many people who think the messy version is better, because it "uses more of the language" and it makes them look smarter. Or maybe Rust just makes it too hard to see through the semantics and realize that just because a feature is there doesn't mean that you need it.
There's also a massive difference between the type of C or Perl someone like me would write, versus someone trying to cope with a more hostile environment or who requires higher levels of performance. My code might be easier to read, but it technically has issues; they are mostly not relevant, while the reverse is true for a more skilled developer in a different environment. Rust seems to attract really skilled people, who have really defensive code styles or who use more of the provided language features, and that makes the code harder to read, but that would also be the case in e.g. C++.
> I am thrown back by how... "janky" the syntax is.
Well, if you come from C++ it's a breath of fresh air! Rust is like a "cleaned-up" C++ that does not carry the historical baggage forced by backwards compatibility. It is well thought out from the start. The syntax may appear a bit too synthetic, but that's just the first day of use. If you use it for a few days, you'll soon find that it's a great, beautiful language!
The main problem with rust is that the community around it has embraced all the toxic traditions of the js/node ecosystem, and then some. Cargo is a terrifying nightmare. If you could install regular rust dependencies with "apt install" in debian stable, that would be a different story! But no. They want the version churn: continuously adding and removing bugs, like particle/anti-particle pairs at the boundary of a black hole.
Concerning TFA, adding rust to apt might be a step in the right direction. But it should be symmetric: apt depends on rust, that's great! But all the rust that it depends on needs to be installed by apt, and by apt alone!
I am coming from C++ and think Cargo is a blessing.
I like that I can just add a dependency and be done instead of having to deal with dependencies which require downloading stuff from the internet and making them discoverable for the project specific tool chain - which works differently on every operating system.
While it kinda flies under the radar, most modern C projects do have a kind of package management solution in the form of pkg-config. Instead of the wild west of downloading and installing every dependency and figuring out how to integrate it properly with the OS and your project you can add a bit of syntactic sugar to your Makefile and have that mostly handled for you, save for the part where you will need to use your platform's native package manager to install the dependencies first. On a modern system using a package on a C project just requires a Makefile that looks something like this:
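(The snippet itself didn't survive in this copy of the thread; below is a reconstruction in the same spirit, with libcurl standing in for whatever dependency you actually need.)

    # pkg-config supplies the compiler and linker flags for the dependency;
    # libcurl here is just an example.
    CFLAGS += $(shell pkg-config --cflags libcurl)
    LDLIBS += $(shell pkg-config --libs libcurl)

    myprog: main.c
    	$(CC) $(CFLAGS) -o $@ $< $(LDLIBS)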
This is a real problem but I wouldn't blame the existence of good tooling on it.
Sure, you don't have this issue with C or C++, but that's because adding even a single dependency to a C or C++ project sucks; the tooling sucks.
I wholly blame developers who are too eager to just pull new dependencies in when they could've just written 7 lines themselves.
I remember hearing a few years ago about how developers considered every line of code they wrote as a failing, and talked about how modern development was just gluing otherwise-maintained modules together to avoid having to maintain their own project. I thought this sounded insane and I still do.
And in a way I think AI can help here: you get just the snippet instead of adding a dependency that then becomes a long-term security liability.
On the other hand you don't have developers handrolling their own shitty versions of common things like hashmaps or JSON serializers, just because the dependencies are too hard to integrate.
> The main problem with rust is that the community around it has embraced all the toxic traditions of the js/node ecosystem, and then some. Cargo is a terrifying nightmare. If you could install regular rust dependencies with "apt install" in debian stable, that would be a different story! But no. They want the version churn: continuously adding and removing bugs, like particle/anti-particle pairs at the boundary of a black hole.
Something I didn't appreciate for a long time is that the C/C++ ecosystem does have an npm-like package management ecosystem - it is just implemented at the level of Linux distro maintainers deciding what to package and how. Which worked OK because C was the lingua franca of Unix systems.
But actually it's valuable for programmers to be able to specify their dependencies for their own projects and update them on a schedule unconnected and uncoordinated with the OS's releases. The cargo/npm model is closer to ideal.
Of course what is even better is NixOS-like declarative specification and hashing of all dependencies
As a C/C++ cmake user, cargo sounds like a utopia in comparison. It amazes me that C/C++ package management is still spread across about 5 different solutions.
IMO, the biggest improvement to C/C++ would be ISO defining a package manager à la pip or uv or cargo. I'm so tired of writing cmake. Just... tired.
People that don't understand make are destined to recreate it poorly, and there's no better example than cmake, imho.
Here's my arc through C/C++ build systems:
- make (copy pasted examples)
- RTFM [1]
- recursive make for all sorts of non-build purposes - this is as good as hadoop up to about 16 machines
- autotools
- cmake
- read "recursive make considered harmful" [2]
- make + templates
Anyway, once you've understood [1] and [2], it's pretty hard to justify cmake over make + manual vendoring. If you need windows + linux builds (cmake's most-advertised feature), you'll pretty quickly realize the VS projects it produces are a hot mess, and wonder why you don't just maintain a separate build config for windows.
If I was going to try to improve on the state of the art, I'd clean up a few corner cases in make semantics where it misses productions in complicated corner cases (the problems are analogous to prolog vs datalog), and then fix the macro syntax.
If you want a good package manager for C/C++, check out Debian or its derivatives. (I'm serious -- if you're upset about the lack of packages, there's a pretty obvious solution. Now that docker exists, the packages run most places. Support for some sort of AppImage style installer would be nice for use with lesser distros.)
cmake exists not because people didn't understand make, but because there was no one make to understand. The "c" is for "cross platform." It's a replacement for autoconf/automake, not a replacement for make.
> If I was going to try to improve on the state of the art
cmake is a self-inflicted problem of some C++ users, and an issue independent of the language itself (just like cargo for rust). If you want, you can use a makefile and distribution-provided dependencies, or vendored dependencies, and you don't need cmake.
imo the biggest single problem with C++ is that the simple act of building it is not (and, it seems, cannot be) standardized.
This creates a kind of geographic barrier that segregates populations of C++ users, and just like any natural language, that isolation begets dialects and idioms that are foreign to anyone from a different group.
But the stewards of the language seem to pretend these barriers don't exist, or at least don't understand them, and go on to make the mountain ranges separating our valleys even steeper.
So it's not that CMake is a self-inflicted wound. It's the natural evolution of a tool to fill in the gaps left underspecified by the language developers.
> If you could install regular rust dependencies with "apt install" in debian stable, that would be a different story! But no. They want the version churn: continuously adding and removing bugs, like particle/anti-particle pairs at the boundary of a black hole.
I don't know, it doesn't explain how and why Cargo causes "continuously adding and removing bugs, like particle/anti-particle pairs at the boundary of a black hole."
They are conflating unrelated things. Cargo is a downstream result of the thing that annoys them, not the cause. What they don’t like is that Rust is statically linked with strong versioned dependencies. There are pros and cons to that, but one outcome (which some list as a pro and some list as a con) is that you need to recompile the world for every project. Hence, cargo.
Except they got the order of type and variable wrong. That alone is enough reason to never use Rust, Go, TypeScript or any other language that botches such a critical cornerstone of language syntax.
> Comparison: I often program in Python (and teach it) - and while it has its own syntax warts & frustrations - overall the language has a "pseudocode which compiles" approach, which I appreciate.
I think this is why you don’t like Rust: In Rust you have to be explicit by design. Being explicit adds syntax.
If you appreciate languages where you can write pseudocode and have the details handled automatically for you, then you’re probably not going to enjoy any language that expects you to be explicit about details.
As far as “janky syntax”, that’s a matter of perspective. Every time I deal with Python and do things like “__slots__” it feels like janky layer upon layer of ideas added on top of a language that has evolved to support things it wasn’t originally planned to do, which feels janky to me. All of the things I have to do in order to get a performant Python program feel incredibly janky relative to using a language with first class support for the things I need to do.
Not what they are talking about. Rather, the point is that it's better to use words instead of symbols, like Python over Perl.
Instead of “turbofish” and <‘a>, there could be more keywords like mut or dyn.
Semicolons and ‘c’har are straight out of the seventies as well. :: is not useful and is ugly, etc.
Dunders avoid namespace collisions and are not a big problem in practice, all one char, and easy to read. I might remove the trailing part if I had the power.
Python using indenting to convey specific programming meaning feels janky and outdated to people not familiar with Python, but Python familiar programmers don't think twice about it.
Maybe it's my math background but I honestly prefer symbols to keywords. It's more up front cost in learning, but it's much more efficient in the long run.
When you are doing multiple operations to multiple variables, and need to see it all at once, math-like syntax still has benefits.
But this is not the common case for most programming, which is about detailing business rules. Explicit and verbose (though not excessively) has been shown to be the most readable/maintainable. For example, one character variable names, common in math, are heavily discouraged in professional development.
There’s another level to this as well. To me, calculus notation looks quite elegant, while Perl and (parts of) Rust look like trash. Since they are somewhat similar, the remaining angle is good taste.
Both Python and JS evolved by building on top of older versions, but somehow JS did a way better job than Python, even though Py forced a major breaking change.
Agree about Rust, all the syntax is necessary for what it's trying to do.
Are the many who disagree that it is unreadable more than the people who agree? I have been involved with the language for a while now, and while I appreciate what you and many others have done for it, the sense that the group is immune to feedback just becomes too palpable too often. That, and the really aggressive PR.
Rust is trying to solve a really important problem, and so far it might well be one of the best solutions we have for it in a general sense. I 100% support its use in as many places as possible, so that it can evolve. However, its evolution seems to be thwarted by a very vocal subset of its leadership and community who have made it a part of their identity and whatever socio-political leverage toolset they use.
I've found the rust core team to be very open to feedback. And maybe I've just been using Rust for too long, but the syntax feels quite reasonable to me.
Just for my own curiosity, do you have any examples of suggestions for how to improve the syntax that have been brought up and dismissed by the language maintainers?
> Are the many who disagree that it is unreadable more than the people who agree?
I have no way to properly evaluate that statement. My gut says no, because I see people complain about other things far more often, but I do think it's unknowable.
I'm not involved with Rust any more, and I also agree with you that sometimes Rust leadership can be insular and opaque. But the parent isn't really feedback. It's just a complaint. There's nothing actionable to do here. In fact, when I read the parent's post, I said "hm, I'm not that familiar with Kotlin actually, maybe I'll go check it out," loaded up https://kotlinlang.org/docs/basic-syntax.html, and frankly, it looks a lot like Rust.
But even beyond that: it's not reasonably possible to change a language's entire syntax ten years post 1.0. Sure, you can make tweaks, but turning Rust into Python simply is not going to happen. It would be irresponsible.
Rust is almost git hype 2.0. That hype set the world up with (a) a dominant VCS that is spectacularly bad at almost everything it does compared to its competitors and (b) the dominant GitHub social network owned by MS that got ripped to train Copilot.
Developers have a way of running with a hype that can be quite disturbing and detrimental in the long run. The one difference here is that rust has some solid ideas implemented underneath. But the community proselytizing and throwing non-believers under the bus is quite real.
The lifetime syntax was taken from OCaml but it has somewhat different semantics than OCaml. I honestly get a bit tripped up when I look at OCaml code (a language I'm a beginner at), and see ordinary parameterized types using syntax that suggests to me, from a Rust background, "woah, complex lifetime situation ahead!"
I know that Graydon Hoare is a fan of OCaml and that it was a core inspiration for Rust, and I sometimes wonder if he gets tripped up too by having to switch between Rust-inspired and OCaml-inspired interpretations of the same characters.
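Roughly the collision being described, with the OCaml side shown only in comments:

    // In Rust, the apostrophe introduces a lifetime parameter:
    struct Slice<'a> { data: &'a [u8] }
    // In OCaml, the same apostrophe marks an ordinary type variable
    // ('a list, 'a option), so the identical character signals something
    // much more mundane -- hence the mental gear-grinding described above.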
It's similar but different: both are type variables, but it's true that it's used for the "other" type variables in Rust.
For what it's worth, I am not even sure that Graydon was the one who introduced lifetime syntax. He was a fan of terseness, though: Rust's keywords used to be all five characters or shorter.
I don't think it's an awful choice, but I'll admit that pipes in lambdas are not my favorite bit of syntax. I'm not a fan of them in Ruby either. I personally prefer JavaScript-ish => for lambdas. But I'm not gonna try to bikeshed one syntax decision made over a decade ago that has relatively minor consequences for other parts of the language. The early Rust core team had different taste than I do essentially, and that's fine.
Rust's pipes in lambdas come from Ruby, a language that's often regarded as having beautiful syntax.
Rust is objectively not mojibake. The equivalent here would be like using a-z, as Rust's syntax is borrowed from other languages in wide use, not anything particularly esoteric. (Unless you count OCaml as esoteric, which I do believe is somewhat arguable, but that's only one thing; the argument still holds for the vast majority of the language.)
I would encourage you to give it a try anyways. Unfamiliar syntax is off-putting for sure, but you can get comfortable with any syntax.
Coming from Python, I needed to work on some legacy Perl code. Perl code looks quite rough to a new user. After time, I got used to it. The syntax becomes a lot less relevant as you spend more time with the language.
Once one does spend some time to become comfortable with the language, that feeling of messiness with unfamiliar syntax fades away. That's the case with any unfamiliar language, not just Rust.
I used Rust for a year and still wasn't used to the syntax, though this was v1.0 so idk what changed. I see why it's so complicated and would definitely prefer it over C or Cpp, but wouldn't do higher-level code in it.
I’ve been writing python professionally for over 10 years. In the last year I’ve been writing more and more Rust. At first I thought the same as you. It’s a fugly language, there’s no denying it. But once I started to learn what all the weird syntax was for, it began to ruin Python for me.
Now I begrudge any time I have to go back to python. It feels like its beauty is only skin deep, but the ugly details are right there beneath the surface: prolific duck typing, exceptions as control flow, dynamic attributes. All these now make me uneasy, like I can’t be sure what my code will really do at runtime.
Rust is ugly but it’s telling you exactly what it will do.
>Now I begrudge any time I have to go back to python. It feels like its beauty is only skin deep, but the ugly details are right there beneath the surface: prolific duck typing, exceptions as control flow, dynamic attributes. All these now make me uneasy, like I can’t be sure what my code will really do at runtime.
I feel like this sentiment comes from people who haven't really taken the time to fully see what the Python ecosystem is.
Any language can have shittily written code. However, languages that disallow it by default mean you have to spend extra time prototyping things, whereas in Python you can often make things work without much issue. Dynamic typing and attributes make the language very flexible and easily adaptable.
Oh I’m familiar with the ecosystem. Yes, the dynamic nature does make it easy to prototype things flexibly. The problem is when your coworker, or you, decide to flexibly and dynamically get the job done on a Friday before a long weekend, and then 3 months later you need to figure out how a variable is being set, or where a method is being called.
And that's no different from writing Rust with a bunch of unsafes, and a bunch of indirection as far as processing flow goes.
The nice thing about Python is that it allows you to do either. And naturally, Python has gotten much faster, to the point where it's as fast as Java for some things, because when you don't use dynamic typing, it actually recognizes this and optimizes compiled code without having to carry that type information around.
It’s not the same at all. In Rust you cannot just throw an attribute on to a struct in the middle of a function because it makes some call further down the chain easier, no matter how much unsafe you use.
I’m not a python hater; you can get some great stuff done with it quickly. But my confidence in writing large complex systems in it is waning.
Seems like a fairly decent syntax. It’s less simple than many systems languages because it has a very strong type system. That’s a choice of preference in how you want to solve a problem.
I don’t think the memory safety guarantees of Rust could be expressed in the syntax of a language like C or Go.
I code mostly in Go and the typing sloppiness is a major pain point.
Example: You read the expression "x.f", say, in the output of git-diff. Is x a struct object, or a pointer to a struct? Only by referring to enclosing context can you know for sure.
Is ML a systems language? Sorry, maybe my definition is wrong, but I consider a systems language something that’s used by a decent amount of OS’es, programming languages and OS utilities.
I assume you’re talking about OCaml et al? I’m intrigued by it, but I’m coming from a Haskell/C++ background.
Rust is somewhat unique among systems languages because it’s the first one that’s not “simple” like C but is still used for systems tools, more so than Go is as far as I’m aware.
Which probably has to do with its performance characteristics being close to the machine, which Go cannot do (ie based on LLVM, no GC, etc)
One of the design goals of rust is explicitness. I think if Rust had type elision, like many other functional languages, it would go a long way to cleaning up the syntax.
Maybe I've Stockholm'd myself, but I think Rust's syntax is very pleasant. I also think a lot of C code looks very good (although there is some _ugly_ C code out there).
Sometimes the different sets of angle and curly brackets adding up can look ugly at first, and maybe the anonymous function syntax of || {}, but it grows on you if you spend some time with the language (as do all syntaxes, in my experience).
The family of languages that started with ML[0] mostly look like this. Studying that language family will probably help you feel much more at home in Rust.
Many features and stylistic choices from ML derivatives have made their way into Swift, Typescript, and other non-ML languages.
I often say that if you want to be a career programmer, it is a good idea to deeply learn one Lisp-type language (which will help with stuff like Python), one ML-type language (which will help with stuff like Rust) and one C-type language (for obvious reasons.)
You might enjoy https://nim-lang.org/ which has a Python-like syntax with even more flexibility really (UFCS, command-like calls, `fooTemplate: stuff` like user-defined "statements", user-defined operators, term-rewriting macros and more). With ARC it's really just about as safe as Rust and most of the stdlib is fast by default. "High quality" is kind of subjective, but they are often very welcoming of PRs.
Anyway, to your point, I think a newbie could pick up the basics quickly and later learn more advanced things. In terms of speed, like 3 different times I've compared some Nim impl to a Rust impl and the Nim was faster (though "at the extreme" speed is always more a measure of how much optimization effort has been applied, esp. if the language supports inline assembly).
https://cython.org/ , which is a gradually typed variant of Python that compiles to C, is another decent possibility.
The sigils in Rust (and perl) are there to aid readability. After you use it a bit, you get used to ignoring them unless they look weird.
All the python programs I've had to maintain (I never choose python) have had major maintainability problems due to python's clean looking syntax. I can still look at crazy object oriented perl meta-programming stuff I wrote 20 years ago, and figure out what it's doing.
Golang takes another approach: They impoverished the language until it didn't need fancy syntax to be unambiguously readable. As a workaround, they heavily rely on codegen, so (for instance) Kubernetes is around 2 million lines of code. The lines are mostly readable (even the machine generated ones), but no human is going to be able to read them at the rate they churn.
Anyway, pick your poison, I guess, but there's a reason Rust attracts experienced systems programmers.
Kotlin programmer here who is picking up Rust recently. you're right, it's no Kotlin when it comes to the elegance of APIs but it's also not too bad at all.
In fact there are some things about the syntax that are actually nice like range syntax, Unit type being (), match expressions, super explicit types, how mutability is represented etc.
I'd argue it's the most similar system level language to Kotlin I've encountered. I encourage you to power through that initial discomfort because in the process it does unlock a level of performance other languages dream of.
I prefer Rust syntax to Python's purely on the grounds that Rust is a curly-brace language and Python is an indentation-sensitive language. I like it when the start and end of scopes in code are overtly marked with a non-whitespace character, it reduces the chances of bugs caused by getting confused about what lines of code are in what scope and makes it easier to use text editor tools to move around between scopes.
Beyond that issue, yeah most of Rust's syntactic noise comes from the fact that it is trying to represent genuinely complicated abstractions to support statically-checked memory safety. Any language with a garbage collector doesn't need a big chunk of Rust's syntax.
I don’t program much in Rust, but I find it a beautiful syntax… they took C++ and made it pretty much strictly better along with taking some inspiration from ML (which is beautiful imo)
Have you considered that part of it is not the language but the users?
I'm learning rust and the sample code I frequently find is... cryptically terse. But the (unidiomatic, amateurish) code I write ironically reads a lot better.
I think rust attracts a former c/c++ audience, which then bring the customs of that language here. Something as simple as your variable naming (character vs c, index vs i) can reduce issues already.
It doesn't have the single feature that anyone cares about in Rust - compiler-enforced ownership semantics. And it's not in any way a system-level language (you couldn't use it without its stdlib for example, like in the Linux kernel).
The other features it shares with Rust are also shared by many other languages.
But Swift is not "Kotlin for Rust" though, I can't see the connection at all. "Kotlin for Rust" would be a language that keeps you in the Rust ecosystem.
The commenter I replied to seems to like Kotlin. Swift is extremely close to Kotlin in syntax and features, but is not for the JVM. Swift also has a lot of similarities with Rust, if you ignore the fact that it has a garbage collector.
A Kotlin for Rust would be a drop-in replacement where you could have a crate or even just a module written in this hypothetical language and it just works. No bridging or FFI. That’s not Swift.
As an official greybeard who has written much in C, C++, Perl, Python, and now Rust, I can say Rust is a wonderful systems programming language. Nothing at all like Perl, and as others have mentioned, a great relief from C++ while providing all the power and low-level bits and bobs important for systems programming.
I would argue that anything that is not Lisp has a complicated syntax.
The question is: is it worth it?
With Rust for the answer is yes. The reliability, speed, data-race free nature of the code I get from Rust absolutely justifies the syntax quirks (for me!).
> Why are we pushing forward with a language that has a Perl-esque unreadability...?
The reason is the same for any (including Perl, except those meme languages where obfuscation is a feature) language: the early adopters don't think it's unreadable.
Aside from async/await, which I agree is somewhat janky syntactically, I'm curious what you consider to be janky. I think Rust is overall pretty nice to read and write. Patterns show up where you want them, type inference is somewhat limited but still useful. Literals are readily available. UFCS is really elegant. I could go on.
Ironically, I find Python syntax frustrating. Imports and list comprehensions read half backwards, variable bindings escape scope, dunder functions, doc comments inside the function, etc.
What do people actually mean when they say "the syntax is janky"?
I often see comparisons to languages like Python and Kotlin, but both encode far less information in their syntax because they don't have the same features as Rust, so there's no way for them to express the same semantics as Rust.
Sure, you can make Rust look simpler by removing information, but at that point you're not just changing syntax, you're changing the language's semantics.
Is there any language that preserves the same level of type information while using a less "janky" syntax?
> Every time I consider learning Rust, I am thrown back by how... "janky" the syntax is. It seems to me that we ought to have a system-level language which builds upon the learnings of the past 20+ years.
I said this years ago and I was basically told "skill issue". It's unreadable. I shudder to think what it's like to maintain a Rust system at scale.
The syntax has relatively little to do with how easy or hard it is to maintain a rust system at scale. If you get something wrong the compiler will alert you, and most of the syntax is there for good reasons that anyone maintaining any kind of software system at scale needs to understand (and indeed the syntax helps you be clear about what you mean to the compiler, which facilitates helpful compiler error messages if you screw something up when modifying code).
I'm writing this as a heavy python user in my day job. Python is terrible for writing complex systems in. Both the language and the libraries are full of footguns for the novice and expert alike. It has 20 years of baggage, the packaging and environment handling is nothing short of an unmitigated disaster, although uv seems to be a minor light at the end of the tunnel. It is not a simple language at this point. It has had so many features tacked on, that it needs years of use to have a solid understanding of all the interactions.
Python is a language that became successful not because it was the best in its class, but because it was the least bad. It became the lingua franca of quantitative analysis because R was even worse and matlab was a closed ecosystem with strong whiffs of the 80s. It became successful because it was the least bad glue language for getting up and running with ML and later on LLMs.
In comparison, Rust is a very predictable and robust language. The tradeoff it makes is that it buys safety for the price of higher upfront complexity. I'd never use Rust to do research in. It'd be an exercise in frustration. However, for writing reliable and robust systems, it's the least bad currently.
What's wrong with R? I used it and liked it in undergrad. I certainly didn't use it as seriously as the users who made Python popular, but to this day I remember R fondly and would never choose Python for a personal project.
My R use was self-taught, as well. I refused to use proprietary software for school all through high school and university, so I used R where we were expected to use Excel or MatLab (though I usually used GNU Octave for the latter), including for at least one or two math classes. I don't remember anything being tricky or difficult to work with.
R is the most haphazard programming environment I've ever used. It feels like an agglomeration of hundreds of different people's shell aliases and scripting one-liners.
I'll grant my only exposure has been a two- or three-day "Intro to R" class but I ran screaming from that experience and have never touched it again.
It maybe worked against me that I am a programmer, not a statistician or researcher.
When I used it I was a computer science student. But I wasn't reading anyone else's code or trying to maintain anything complex, which is why I asked what I did. I'm sure there are quirks I never had to deal with.
So is it just that the stdlib is really big and messy?
Legit question, really. A comparative study on language readability, using code that does the same thing written idiomatically in different languages, would be interesting. Beyond syntax, idioms/paradigm/familiarity should also play a role.
Not the person you're replying to, but as someone who doesn't know Rust, at first glance it seems like it's littered with too many special symbols and is very verbose. As I understand it, this is required because of the very granular low-level control Rust offers.
Maybe unreadable is too strong a word, but there is a valid point that it looks unapproachable to someone new.
> littered with too many special symbols and very verbose
This seems kinda self-contradicting. Special symbols are there to make the syntax terse, not verbose. Perhaps your issue is not with how things are written, but that there's a lot of information for something that seems simpler. In other words, a lot of semantic complexity, rather than an issue with syntax.
I think the main issue people who don't like the syntax have with it is that it's dense. We can imagine a much less dense syntax that preserves the same semantics, but IMO it'd be far worse.
Using matklad's first example from his article on how the issue is more the semantics[1]
we can imagine a much less symbol-heavy syntax inspired by POSIX shell, FORTH, & Ada:
generic
type P is Path containedBy AsRef
public function read takes type Path named path returns u8 containedBy Vector containedBy Result fromModule io
function inner takes type reference to Path named path returns u8 containedBy Vector containedBy Result fromModule io
try
let mutable file = path open fromModule File
let mutable bytes = new fromModule Vector
try
mutable reference to bytes file.read_to_end
bytes Ok return
noitcnuf
path as_ref inner return
noitcnuf
and I think we'll all agree that's much less readable even though the only punctuation is `=` and `.`. So "symbol heavy" isn't a root cause of the confusion, it's trivial to make worse syntax with fewer symbols. And I like RPN syntax & FORTH.
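For reference, the Rust original being transliterated above is, give or take, the standard library's fs::read (the example matklad starts from):

    use std::fs::File;
    use std::io::{self, Read};
    use std::path::Path;

    pub fn read<P: AsRef<Path>>(path: P) -> io::Result<Vec<u8>> {
        // The outer generic function delegates to a non-generic inner one to
        // keep monomorphized code small; this is the part the pseudo-syntax
        // above spells out in words.
        fn inner(path: &Path) -> io::Result<Vec<u8>> {
            let mut file = File::open(path)?;
            let mut bytes = Vec::new();
            file.read_to_end(&mut bytes)?;
            Ok(bytes)
        }
        inner(path.as_ref())
    }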
That's the thing though... Rust does build on many of those learnings. For starters, managing a big type system is better when some types are implicit, so Rust features type inference to ease the burden in that area. They've also learned from C++'s mistake of having a context-sensitive grammar. They learned from C++'s template nightmare error messages, so generics are easier to work with. They also applied learnings about immutability being a better default than mutability. The reason Rust is statically linked and packages are managed by a central repository is based on decades of seeing how difficult it is to build and deploy projects in C++, and how easy it is to build and deploy projects in the Node / NPM ecosystem. Pattern matching and tagged unions were added because of how well they worked in functional languages.
As for "Perl-esque unreadability" I submit that it's not unreadable, you are just unfamiliar. I myself find Chinese unreadable, but that doesn't mean Chinese is unreadable.
> Is there a "Kotlin for Rust"?
Kotlin came out 16 years after Java. Rust is relatively new, and it has built on other languages, but it's not the end point. Languages will be written that build on Rust, but that will take some time. Already many nascent projects are out there, but it is yet to be seen which will rise to the top.
C++ is vastly more readable. I will never go back to writing or maintaining C++ projects, but drop me into a C++ file to review something and it is usually very easy to grok.
Part of this is style and conventions though. I have implemented an STL container before, and that templating hell is far worse than anything I’ve ever seen in the Rust ecosystem. But someone following modern C++ conventions (e.g. a Google library) produces very clean and readable code.
> It seems to me that we ought to have a system-level language which builds upon the learnings of the past 20+ years
I mean, Rust does. It builds on 20+ years of compiler and type system advancements. The syntax is verbose if you include all the things you can possibly do. If you stick to the basics it's pretty similar to most other languages. Hell, I'd say a lot of Rust syntax is similar to type-hinted Python.
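A rough illustration of that comparison; the function itself is made up:

    // Sticking to the basics, Rust reads a lot like type-hinted Python:
    fn mean(values: &[f64]) -> f64 {
        let sum: f64 = values.iter().sum();
        sum / values.len() as f64
    }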
Having said that, comparing a GC'd dynamic language to a systems programming language just isn't a fair comparison. When you need to be concerned about memory allocation you just need more syntax.
I think the problem for people is the traditional problem that a lot of people have had with a lot of languages since C took off: it doesn't look like ALGOL, and doesn't have the semantics of ALGOL.
The reason python looks like procedural pseudocode is because it was designed to look like procedural pseudocode. Rust is not just a new skin over ALGOL with a few opinionated idioms, it's actually different - mostly to give hints of intention to the compiler, but I don't even think it started from the same place. It's more functional than anything imo, but without caring about purity or appearance, and that resulted in something that superficially looks sufficiently ALGOL-like to confuse people who are used to python or Kotlin.
> I genuinely believe that languages ought to start with "newbie friendliness", and would love to hear challenges to that idea.
In conclusion, I think this is a red herring. Computer languages are hard. What you're actually looking for is something that is ALGOL-like for people who have already done the hard work of learning an ALGOL-like. That's not a newbie, though. Somebody who learned Rust first would make the same complaint about python.
What are you talking about? Rust’s function signature and type declaration syntaxes are extremely vanilla, unless you venture into some really extreme use cases with lots of lifetime annotations and generic bounds.
That's just a weird and unrealistic example, though. Like, why is process_handler taking an owned, boxed reference to something it only needs shared access to? Why is there an unnecessary 'a bound on handler?
In the places where you need to add lifetime annotations, it's certainly useful to be able to see them in the types, rather than relegate them to the documentation like in C++; cf. all the places where C++'s STL has to mention iterator and reference invalidation.
LLMs LOVE to write Rust like this. They add smart pointers, options and lifetimes everywhere when none of those things are necessary. I don’t know what it is, but they love over-engineering it.
As a first guess, they're trained on lots of social media and Q&A content. The former has lots of complaints about "look how complex rust is!" while the latter has lots of "help I've written very complex rust".
I agree that the signature for process_handler is weird, but you could steelman it to take a borrowed trait object instead, which would have an extra sigil.
The handler function isn't actually unnecessary, or at least, it isn't superfluous: by default, the signature would include 'a on self as well, and that's probably not what you actually want.
I do think that the example basically boils down to the lifetime syntax though, and yes, while it's a bit odd at first, every other thing that was tried was worse.
> The handler function isn't actually unnecessary, or at least, it isn't superfluous: by default, the signature would include 'a on self as well, and that's probably not what you actually want.
To clarify, I meant the 'a in `Box<dyn Handler + 'a>` in the definition of `process_handler` is unnecessary. I'm not saying that the <'a> parameter in the definition of Handler::handle is unnecessary, which seems to be what you think I said, unless I misunderstood.
Lifetimes really only come into play if you are doing something fairly obscure. Oftentimes when I’m about to add lifetimes to my code I re-think it and realize there is a better way to architect it that doesn’t involve them at all. They are a warning sign.
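A small sketch of that "rethink the design" move, with made-up types:

    // Threading a borrow (and its lifetime) through your types...
    struct ConfigBorrowed<'a> { name: &'a str }
    // ...can often be replaced by simply owning the data, at the cost of a
    // clone or an allocation, and the lifetime parameter disappears:
    struct Config { name: String }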
While I don’t disagree that this is at first blush quite complex, using it as an example also obscures a few additional details that aren’t present in something like python, namely monads and lifetimes. I think in absence of these, this code is a bit easier to read. However, if you had prior exposure to these concepts, I think that this is more approachable. I guess what I’m getting at here is that rust doesn’t seem to be syntactic spaghetti as much as it is a confluence of several lesser-used concepts not typically used in other “simpler” languages.
No, the use cases of Rust are pretty much the same as the use cases of C++. Most Rust code shouldn't have objects with complicated lifetimes, just like most code in any language should avoid objects with complicated lifetimes.
Python users don’t even believe in enabling cursory type checking; their language design is surpassed even by JavaScript. Should it really even be mentioned in a language comparison? It is a tool for ML; nothing else in that language is good or worthwhile.
”[One] major contributor to APT suggested it would be better to remove the Rust code entirely as it is only needed by Canonical for its Launchpad platform. If it were taken out of the main APT code base, then it would not matter whether they were written in Rust, Python, or another language, since the tools are not directly necessary [for regular installations].”
Given the abundance of the hundreds of deb-* and dh-* tools across different packages, it is surprising that apt isn’t more actively split into separate, independent tools. Or maybe it is, but they are all in a monorepo, and the debate is about how if one niche part of the monorepo uses Rust then the whole suite can only be built on platforms that support Rust?
#!/bin/sh
# sketch only: build_core, has_rust, and build_launchpad_utils are placeholders
build_core
if has_rust
then
    build_launchpad_utils
fi
It’s like arguing about the bike shed when everyone takes the bus except for one guy who cycles in every four weeks to clean the windows.
If this could be done it seems like the ideal compromise. Everyone gets what they want.
That said, eventually more modern languages will become dependencies of these tools one way or another (and they should). So probably Debian as a whole should come to a consensus on how that should happen, so it can happen in some sort of standard and fair fashion.
>tool that can translate memory-safe Rust code into memory-unsafe C code
FWIW, there are two such ongoing efforts. One[1] is an alternative Rust compiler, written in C++, that emits C (in the project's words, "high-level assembly"); the other[2] is a Rust compiler backend/plugin (as an extra goal beyond its initial one of compiling Rust to CLR assembly). The latter apparently is[3] quite modular and could be adapted to other targets too. Other options are continuing/improving the GCC front-end for Rust, and a recent attempt to write a Rust compiler in C[4] that compiles to QBE IR, which can then be compiled with QBE/cc.
That's not what the comment said. It said, "How about a Rust to C converter?..." The idea was that using a converter could eliminate the problem of not having a rust compiler for certain platforms.
FWIW sudo has been maintained by an OpenBSD developer for a while now but got replaced in the base system by doas. Independent of any concerns about Rust versus C, I don't think it's quite as unreasonable as you're claiming to consider alternatives to sudo given that the OS that maintains it felt that it was flawed enough to be worth writing a replacement for from scratch.
sudo had grown a lot of features and a complicated config syntax over the years, which ended up being confusing and rarely needed in practice. doas is a lot simpler. It wasn't just a rewrite of a flawed utility but a simplification of it.
Regardless of the exact terminology used to describe why it was done, my point is that assuming that people are "lunatics" because they want to replace sudo is not a particularly compelling claim, and that's what the comment I was responding to had said.
My point is against rewrites of critical software just for the sake of rewriting it in *insert my favorite language*. Zig is also a safer language than C, as are many other alternatives, yet the Zig community is not obsessed with rewriting old software but with writing new software. And the Zig compiler has excellent C interop (in fact it can compile C/C++), yet the community is more focused on writing new software.
There are many factors that make software reliable, it's not just a matter of pretty types and memory safety, there's factors like platform/language stability, skill and expertise of the authors, development speed and feedback.
Cue all those battle-tested programs in which people keep finding vulnerabilities decades after they were considered "done". You should try looking at the test results once in a while.
And by the way, we had to replace almost all of the basic Unix tools at the turn of the century because they were completely unfit for purpose. There aren't many left.
Converting parsers to Rust is not "pointless". Doing string manipulation in C is both an awful experience and also extremely fertile ground for serious issues.
It’s very easy to write a string library in C which makes string operations high level (both in API and memory management). Sure, you shouldn’t HAVE to do this. I get it. But anyone writing a parser is definitely skilled enough to maintain a couple hundred lines of code for a linear allocator and a pointer plus length string. And to be frank, doing things like “string operations but cheaply allocated” is something you have to do ANYWAY if you’re writing e.g. a parser.
I think it is more a matter of convenience. There are countless string implementations for C, some tiny projects, others part of larger frameworks like Glib. At the end of the day a C developer has to decide if they are going to pull in half of GNOME to handle a few lines of IO or if they are just going to use the functions the C standard conveniently ships with. Most people are going to do the latter.
Sure, which is highly valuable information that hopefully made its way into a testing / verification suite. Which can then be used to rewrite the tool into a memory-safe language, which allows a lot of fixes and edge cases that were added over time to deal with said issues to be refactored out.
Of course there's a risk that new issues are introduced, but again, that depends a lot on the verification suite for the existing tool.
Also, just because someone did a port, doesn't mean it has to be adopted or that it should replace the original. That's open source / the UNIX mentality.
Calling it pointless comes across as jaded. It's not pointless.
Supporting Rust attracts contributors, and those contributors are much less likely to introduce vulnerabilities in Rust when contributing vs alternatives.
The author of the rust software did not solve the platform problem, as a result it is not a solution. Since it is not a solution, it should be reverted. It's really that simple.
All compilers do anyways is translate from one language specification to another. There's nothing magical about Rust or any specific architecture target. The compiler of a "memory safe" language like Rust could easily output assembly with severe issues in the presence of a compiler bug. There's no difference between compiling to assembly vs. C in that regard.
For one, signed integer overflow is never undefined behavior in Rust (by default it panics in debug builds and wraps around in release builds), while it's undefined behavior in C. This means that the LLVM IR emitted by the Rust compiler for signed integer arithmetic can't be directly translated into the analogous C code, because that would change the semantics of the program. There are ways around this and other issues, but they aren't necessarily simple, efficient, and portable all at once.
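A small example of the mismatch, assuming nothing about how a real translator would handle it:

    fn main() {
        let x: i32 = i32::MAX;
        // Plain `x + 1` panics in debug builds and wraps in release builds,
        // but either way the behavior is defined. The explicit form always wraps:
        let y = x.wrapping_add(1); // -2147483648 on every build
        println!("{y}");
        // A naive translation to C as `int y = x + 1;` is undefined behavior,
        // so a C compiler may assume the overflow never happens and optimize
        // on that assumption.
    }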
You guys seem to be assuming transpiling to C means it must produce C that DTRT on any random C compiler invoked any which way on the other side, where UB is some huge possibility space.
There's nothing preventing it from being some specific invocation of a narrow set of compilers like gcc-only of some specific version range with a set of flags configuring the UB to match what's required. UB doesn't mean non-deterministic, it's simply undefined by the standard and generally defined by the implementation (and often something you can influence w/cli flags).
> You guys seem to be assuming transpiling to C means it must produce C that DTRT on any random C compiler invoked any which way on the other side, where UB is some huge possibility space.
Yes, that's exactly what "translating to C" means – as opposed to "translating to the very specific C-dialect spoken by gcc 10.9.3 with patches X, Y, and Z, running on an AMD Zen 4 under Debian 12.1 with glibc 2.38, invoked with flags -O0 -g1 -no-X -with-Y -foo -blah -blub...", and may the gods have mercy if you change any of this!
The gigantic difference is that assembly language has extremely simple semantics, while C has very complex semantics. Similarly, assembler output is quite predictable, while C compilers are anything but. So the level of match between the Rust code and the machine code you'll get from a Rust-to-assembly compiler will be much, much easier to understand than the match you'll get between the Rust code and the machine code produced by a C compiler compiling C code output by a Rust-to-C transpiler.
Rust developers are so dogmatic about their way being the best and only way that I just avoid it altogether. I've had people ask about Rust in issues/discussions in small hobby projects I released as open source - I just ban them immediately because there is no reasoning with them and they never give up. Open source terrorists.
"Open source terrorism" is a hilarious designation for Rust-like traditions and customs. I wonder what other programming language/software communities may fall under this definition?
Shouldn't we wait until Rust gets full support in GCC? This should resolve the issue with ports without a working Rust compiler.
I don't have a problem with Rust, it is just a language, but it doesn't seem to play along well with the mostly C/C++ based UNIX ecosystem, particularly when it comes to dependencies and package management. C and C++ don't have one, and often rely on system-wide dynamic libraries, while Rust has cargo, which promotes large dependency graphs of small libraries, and static linking.
I have never seen a program segfault and crash more than apt. The status quo is extremely bad, and it desperately needs to be revamped in some way. Targeted rewrites in a memory safe & less mistake-prone language sounds like a great way to do that.
If you think this is a random decision caused by hype, cargo culting, or a maintainer's/canonical's mindless whims... please, have a tour through the apt codebase some day. It is a ticking time bomb, way more than you ever imagined such an important project would be.
I've been using apt regularly on Debian for a long time and never seen it crash or segfault. Very strange that you do. All software has bugs of course, but apt is so heavily used that I expect it gets attention. It just works for me.
You know, it is easy to find this kind of nitpicking and seemingly eternal discussion over details exhausting and meaningless, but I do think it is actually a good sign and a consequence of "openness".
In politics, authoritarianism tends to show a pretty façade where everyone mostly agrees (reality be damned), and discussion and dissenting voices are only allowed to a certain extent, as a communication tool. This is usually what we see in corporate development.
Free software is much more like a democracy: everyone can voice their opinion freely, and it tends to be messy, confrontational, nitpicky. It does often slow down changes, but it also avoids the common pitfall of authoritarian regimes of going head first into a wall at the speed of light.
Open source software doesn't have one governance model, and most of it starts out as basically pure authoritarian rule.
It's only as the software ages, grows, and becomes more integral that it switches to more democratic forms of maintenance.
Even then, the most important OS code on the planet, the kernel, is basically a monarchy with King Linus holding absolute authority to veto the decision of any of the Lords. Most stuff is maintained by the Lords but if Linus says "no" or "yes" then there's no parliament which can override his decision (beyond forking the kernel).
""and not be held back by trying to shoehorn modern software on retro computing devices""
Nice. So, discrimination against poor users who are running "retro" machines because that is the best they can afford or acquire.
I knew of at least two devs who are stuck with older 32 bit machines as that is what they can afford/obtain. I even offered to ship them a spare laptop with a newer CPU and they said thanks but import duties in their country would be unaffordable. Thankfully they are also tinkering with 9front which has little to no issues with portability and still supports 32 bit.
Looking at the list of affected architectures: Alpha (alpha), Motorola 680x0 (m68k), PA-RISC (hppa), and SuperH (sh4) I think these are much much more likely to be run by enthusiasts than someone needing an affordable computer.
The last 32bit laptop CPU was produced nearly 20 years ago.
Further, there are still several LTS linux distros (including the likes of Ubuntu and Debian) which don't have the rust requirement and won't until the next LTS. 24.04 is supported until 2029. Meaning you are talking about a 25 year old CPU at that point.
And even if you continue to need support, Debian-based distros aren't the only ones on the planet. You can pick something else if it really matters.
> The last 32bit laptop CPU was produced nearly 20 years ago.
15 years max; I can easily find documentation of Intel shipping Atom chips without 64-bit support in 2010, though I haven't found a good citation for when exactly that ended.
We're basically at a point where running those older machines is more expensive, once you factor in power use.
Even then, people using ancient fifth-hand machines are almost certainly still going to run x86 - which means they'll have no trouble running Rust as 32-bit x86 is a supported target. Their bigger issue is going to be plain old C apps dropping 32-bit support!
"Retro" in this case genuinely means "horribly outdated". We're talking about systems with CPUs in the hundreds of MHz with probably fewer than a gigabyte in memory. You might do some basic word processing using Windows 95, but running anything even remotely resembling a modern OS is completely impossible. And considering their age and rarity, I'd be very impressed if anyone in a poor country managed to get their hands on it.
Agreed, using these architectures isn't related to one's finances or the unaffordability of hardware. Using obscure hardware like this for hobbyist reasons is a privilege, and one that rarely demands the latest upstream for everything at that.
Are you trying to suggest there is a nontrivial community of people who cannot afford modern 64-bit Linux platforms, and opt for 9front on some ancient 32-bit hardware instead? Where are they coming from? Don't get me wrong, I love the 9 as much as the next guy, but you seem to paint it as some kind of affordability frontier...
One lives in Brazil and I think the other lives in the Middle East. They both have old second-hand 32-bit laptops from the 00's.
> but you seem to paint it as some kind of affordability frontier...
Yes, because there are people still using old hardware because they have no choice. Also, what's the problem with supporting old architectures? Plan 9 solved the portability problem and a prominent user recently ported it to cheap MIPS routers, so we can run Plan 9 on cheap second-hand network hardware. We have the toolchain support, so we use it.
And believe me, I understand a raspberry pi or whatever is much faster and uses less power but I would rather we reduce e-waste where possible. I still run old 32 bit systems because they work and I have them.
> what's the problem with supporting old architectures?
It's not free, it's not easy, and it introduces hard to test and rarely run code paths that may or may not have problems on the target architecture.
I think there's a pretty strong argument for running hardware produced in the last 10 years for the next 10 or 20 years. However, I think it should be recognized that there were massive advances in compute power from 2000 to 2010 that didn't happen from 2010 to 2025.
A Core 2 Quad (produced in 2010) has ~ 1/2 the performance of the N150 (1/4 the single core performance of the latest AMD 9950).
Meanwhile a Pentium 3 from 2000 has roughly 1/10th the performance of the same Core 2 Quad.
There are simply far fewer differences between CPUs made in 2010 and today vs CPUs made in 2000 to 2010. Even the instruction set has basically become static at this point. AVX isn't that different from SSE and there's really not a whole bunch of new instructions since the x64 update.
> There are simply far fewer differences between CPUs made in 2010 and today vs CPUs made in 2000 to 2010.
I have stopped replacing machines (and smartphones) because they became outdated: the vast majority of compile tasks is finished in a fraction of a second, applications basically load instantly from SSD, and I never run out of RAM. The main limiting factor in my day-to-day use is network latency - and nothing's going to solve that.
My main machine is a Ryzen 9 3900X with 32GB of RAM and a 1TB SSD. And honestly? It's probably overkill. It's on the replacement list due to physical issues - not because I believe I'll significantly benefit from the performance improvements of a current-gen replacement. I'm hoping it'll last until AM6 comes around!
Every task is either "basically instantly", "finishes in a sip of coffee", or "slow enough for a pee break / email response / lunch break". Computers aren't improving enough to make my tasks upgrade to a faster category, so why bother?
>In particular, our code to parse .deb, .ar, .tar, and the HTTP signature verification code would strongly benefit from memory safe languages and a stronger approach to unit testing.
I can understand the importance of safe signature verification, but how is .deb parsing a problem? If you're installing a malicious package you've already lost. There's no need to exploit the parser when the user has already given you permission to modify arbitrary files.
It is possible the deb package is parsed to extract some metadata before being installed and before verifying signature.
Also there is an aspect of defence in depth. Maybe you can compromise one package that itself can't do much, but the installer runs with higher privileges and has network access.
Another angle -- installed package may compromise one container, while a bug in apt can compromise the environment which provisions containers.
And then at some point there is "oh..." moment when the holes in different layers align nicely to make four "bad but not exploitable" bugs into a zero day shitshow
> It is possible the deb package is parsed to extract some metadata before being installed and before verifying signature.
Yes, .deb violates the cryptographic doom principle[1] (if you have to perform any cryptographic operation before verifying the message authentication code (or signature) on a message you’ve received, it will somehow inevitably lead to doom).
Their signed package formats (there are two) add extra sections to the `ar` archive for the signature, so they have to parse the archive metadata & extract the contents before validating the signature. This gives attackers a window to try to exploit this parsing & extraction code. Moving this to Rust will make attacks harder, but the root cause is a file format violating the cryptographic doom principle.
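To make the ordering concrete, here is a minimal sketch (with hypothetical types and function names, not apt's actual code) of what a doom-principle-friendly flow looks like: the signature is checked over the raw, untrusted bytes before any parser runs.

```rust
// Hypothetical sketch only; placeholder types stand in for a real signature
// scheme and a real .deb parser.
struct PublicKey;
struct Package;
#[derive(Debug)]
struct PkgError;

impl PublicKey {
    // Pretend verification of a detached signature over raw bytes.
    fn verify(&self, _raw: &[u8], _sig: &[u8]) -> Result<(), PkgError> {
        Ok(())
    }
}

fn parse_deb(_raw: &[u8]) -> Result<Package, PkgError> {
    Ok(Package)
}

// Doom-principle-friendly order: authenticate first, parse second. The
// attacker-controlled bytes never reach the complex parser unauthenticated.
fn install(raw: &[u8], sig: &[u8], key: &PublicKey) -> Result<Package, PkgError> {
    key.verify(raw, sig)?;
    parse_deb(raw)
}
```

The signed .deb formats invert this order: the signature is itself a member of the `ar` archive, so the archive has to be parsed while the input is still unauthenticated.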
Sorry in advance if this is a dumb question, but isn't Rust's 'Cargo' package manager one of the draws of Rust? While I follow along that Rust's memory safety is a big benefit, does not the package manager and the supply chain attacks that come along with it take away from the benefits? For reference, NPM has had no shortage of supply chain security incidents.
How would adding Rust to such core dependencies not introduce new supply chain attack opportunities?
Cargo defaults to downloading from `crates.io` but can easily be configured to get its dependencies elsewhere. That could be an alternative registry run by a Linux distribution or other organization, or even just overriding paths to dependencies to where local copies are stored. I'd expect a distro like Debian to mandate the use of an internal crate registry which mirrors the crates they're choosing to include in the distro with the versions they're choosing. This adds supply chain attack opportunities in the same way that adding any software adds supply chain attack opportunities, the use of `cargo` instead of `curl` to download sources doesn't change anything here.
The parser can run before the user is asked for permission to make changes. The parsed metadata can then discourage the user from installing the package (e.g. because of extremely questionable dependencies).
Dependencies are probably in the apt database and do not need parsing, but not everything is, or perhaps apt can install arbitrary .deb files now?
Preferably, one should not be able to pwn a package repository by uploading a single malicious .deb file to it. For example, people on Ubuntu frequently use PPAs (personal package archives); you can run your own on Launchpad. If you upload a malicious package, it should not destroy Launchpad.
As someone fighting the C++ toolchain daily, there is a painful irony in seeing APT—the tool supposed to solve dependency hell—creating its own dependency crisis.
I sympathize with the maintainers of retro hardware. But honestly? Holding back the security and maintainability of a modern OS base layer just so an AlphaStation from 1998 can boot feels backwards.
The transition pain is real, and Canonical handled the communication poorly. But the 'legacy C tax' is eternal. We have to move critical infrastructure off it eventually.
I don't think programs should use mixed languages if it's at all avoidable. Linux would be an exception, because I think it can benefit from oxidation and it'll be decades before RedoxOS is ready.
My biggest problem with Rust is that I can't read it. I never know what a given symbol means: is it a keyword, a type, a variable, a constant, or a macro? Sure, loading it into an IDE with a language server may help with understanding the code.
It's funny because, thanks to efforts from companies like Valve, Linux finally seems to be receiving the recognition it deserves, and Rust evangelists and weirdos who claim moral superiority over the rest of us mortals are going to put all of that at risk, because of this obsession with rewriting tools that have actually worked for many years without major issues, just so they can say their language is superior.
If there were some glaring problems with tools like sudo, sort, or apt, and you had a superior version, sure, go ahead. But this is clearly not the case. Sometimes the Rust version is just the same, or even inferior, but people are ready to plunge into destruction just so their distro can have the latest and greatest. It's just vanity.
Maybe the conspiracy theory that big tech finances incompetent people into positions of power in the open source projects it can no longer compete with, in order to destroy them from the inside, is not that far-fetched.
This is just one reason I'm not the biggest fan of Rust. The language is good (as well as what it solves), but this tendency to force it into everything (even where it would provide no benefit whatsoever) is just mind-boggling to me. And the Rust evangelists then wonder why there are so many anti-rust folk.
I think that's reasonable, but surely there's a limit? Like, if one user exists on an old piece of tech, does Debian need to support them forever?
I think this is a nuanced call, personally, and I think there's some room for disagreements here. I just happen to believe that maybe the right decision is to fork at some point and spin off legacy forks when there's a vanishingly small suite of things that cause friction with progress.
The universalization from one developer's post to all Rust "fanatics" is itself an unwelcome attack. I prefer to keep my discussion as civilized as possible.
I read that more as "here's a perfect example of something I'd noticed already" rather than "wow this is a terrible first impression your group is making".
Perhaps this reading is colored by how this same pair of sentiments seems to come up practically every single time there's a push to change the language for some project.
I think you'll experience some pushback on the assertion that that particular quote has a lot of arrogance or disdain in it.
Building large legacy projects can be difficult and tapping into a thriving ecosystem of packages might be a good thing. But it's also possible to have "shiny object" or "grass is greener" syndrome.
Is it arrogant or a clear and straightforward announcement that a Decision has been made and these are the consequences? I'm not seeing any arrogance in the message myself.
"Arrogant" does not mean "forceful" or "assertive" or "makes me angry".
This is forceful, assertive, and probably makes people angry.
Does the speaker have the authority to make this happen? Because if so, this is just a mandate and it's hard to find some kind of moral failing with a change in development direction communicated clearly.
How is this arrogant? Are open source developers now responsible for ensuring every fork works with the dependencies and changes they make?
This seems like a long window, given to ports to say, "we are making changes that may impact you, heads up." The options presented are, frankly, the two primary options "add the dependency or tell people you are no longer a current port".
> I think you'll experience some pushback on the assertion that that particular quote has a lot of arrogance or disdain in it.
It's just a roundabout way of saying "anything that isn't running Rust isn't a REAL computer". Which is pretty clearly an arrogant statement, I don't see any other way of interpreting it.
Be real for a second. People are arguing against Rust because it supports fewer target architectures than GCC. Which of the target architectures do you believe is important enough that it should decide the future development of apt?
I won't be real for a second, because this isn't about that.
Arguing that support for certain architectures should be removed because they see very little real-world use is totally valid. But it's possible to do so in a respectful way, without displaying such utter contempt for anyone who might disagree.
I read it as a straightforward way of saying "support for a few mostly unused architectures is all that is holding us back from adopting rust, and adopting rust is viewed as a good thing"
From the outside it looks like a defense mechanism from a group of developers who have been suffering crusades against them ever since a very prolific C developer decided Rust would be a good fit for the rather successful project he created in his youth.
Maybe they wouldn't experience so much pushback if they were more humble, had more respect for established software and practices, and were more open to discussion.
You can't go around screaming "your code SUCKS and you need to rewrite it my way NOW" at everyone all the time and expect people to not react negatively.
> You can't go around screaming "your code SUCKS and you need to rewrite it my way NOW"
It seems you are imagining things and hate people for the things you imagined.
In reality, there are situations where, during technical discussions, some people stand up and, with trembling voice, start derailing them with "arguments" like "you are trying to convince everyone to switch over to the religion".
https://youtu.be/WiPp9YEBV0Q?t=1529
I disagree very strongly that a suggestion to change something is also a personal attack on the author of the original code. That’s not a professional or constructive attitude.
Are you serious? It's basically impossible to discuss C/C++ anymore without someone bringing up Rust.
If you search for HN posts with C++ in the title from the last year, the top post is about how C++ sucks and Rust is better. The fourth result is a post titled "C++ is an absolute blast" and the comments contain 128 (one hundred and twenty eight) mentions of the word "Rust". It's ridiculous.
Lots of current and former C++ developers are excited about Rust, so it’s natural that it comes up in similar conversations. But bringing up Rust in any conversation still does not amount to a personal attack, and I would encourage some reflection here if that is your first reaction.
To be clear, the "you" and "my" in your sentence refer to the same person. Julian appears to be the APT maintainer, so there's no compulsion except what he applies to himself.
(Maybe you mean this in some general sense, but the actual situation at hand doesn't remotely resemble a hostile unaffiliated demand against a project.)
No, honestly, Rust just has a really crappy attitude and culture. Even as a person who should naturally like Rust (and I do plan to learn it despite all this), I find these people really grating.
As evidenced by this very comment chain. I've seen, by far, way more comments from people annoyed by vegans. I can't even remember the last time I heard a vegan discuss it beyond just stating the food preference when we go out to eat.
As a vegetarian on ethical grounds (mostly due to factory farming of meat) I politely disagree with your assessment.
I have to decline and explain in social settings all the time, because I will not eat meat served to me. But I do not need to preach when I observe others eating meat. I, like all humans, have a finite amount of time and energy. I'd rather spend that time focused on where I think it will do the greatest good. And that's rarely explaining why factory farming of meat is truly evil.
The best time is when someone asks, "why don't you eat meat?" Then you can have a conversation. Otherwise I've found it best to just quietly and politely decline, as more often than not one can be accommodated easily. (Very occasionally, though, someone feels it necessary to try and score imaginary points on you because they have some axe to grind against vegetarians and vegans. I've found it best to let them burn themselves out and move on. Life's too short to worry about them.)
It's not just a dietary choice. It's a personal lifestyle in the sense of being your choice, but not in the sense of a lifestyle limited to your private space.
You think it's wrong to abuse animals. Why would you apply that only to yourself and think it would be OK for others to abuse them? You wouldn't.
Frankly, I more often see meat eaters get defensive. We go to a restaurant, the vegan guy gets a meatless meal, and then he gets bombarded with "Oh, you don't eat meat?" "Why?" "What's wrong with eating meat?" "I just like having a steak now and then."
At the very least, the questions about it breaking unofficial distros, mostly related to some long-discontinued architectures, should never affect how a distro focused on current desktop and server usage develops.
If you have worries or problems beyond unsupported things breaking, then it should be obvious that you can discuss them; that is what the mailing list is for, and that is why you announce intent beforehand instead of just putting things in the changelog.
> complained that Klode's wording was unpleasant and that the approach was confrontational
It's mostly just very direct communication, which in a professional setting is preferable IMHO. I have seen too much time wasted on misunderstandings caused by people not saying things directly out of fear of offending someone.
Though he still could have done better.
> also questioned the claim that Rust was necessary to achieve the stronger approach to unit testing that Klode mentioned:
Given the focus on Sequoia in the mail, my interpretation was that this is less about writing unit tests and more about using some (AFAIK) very well-tested dependencies. But even when it comes to writing code, experience shows that the ease with which you can write tests hugely affects how much testing gets done, and Rust makes it very easy and convenient to unit test everything all the time. That is, if we're speaking about unit tests; other kinds of tests are still nice, but not quite at the same level of convenience.
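For what it's worth, the "convenience" point is mostly that tests live next to the code and need no external harness; a minimal sketch with a made-up function (not anything from apt):

```rust
// Unit tests sit in the same file and run with `cargo test`; no framework,
// no separate test project, and no mocks needed for pure functions like this.
pub fn parse_version(s: &str) -> Option<(u32, u32)> {
    let (major, minor) = s.split_once('.')?;
    Some((major.parse().ok()?, minor.parse().ok()?))
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parses_simple_versions() {
        assert_eq!(parse_version("1.2"), Some((1, 2)));
        assert_eq!(parse_version("garbage"), None);
    }
}
```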
> "currently has problems with rebuilding packages of types that systematically use static linking"
That seems like a _huge_ issue even outside of Rust; no reliable Linux distro should have problems reliably rebuilding things after security fixes, no matter how they're linked.
If I were to guess, this might be related to how the lower levels of dependency management on Linux are quite a mess, due to requirements from the 90s that are no longer relevant today but that some people still obsess over.
To elaborate (sorry for the wall of text), you can _roughly_ fit all dependencies of an application (app) into 3 categories:
1. Programs the system (optionally) provides, called by the app (e.g. over IPC, or by spawning a sub-process), communicating over well-defined, non-language-specific protocols. E.g. most command-line tools, or your system's file picker/explorer, should be invoked like that (that they often aren't is a huge annoyance).
2. Libraries the system needs to provide, called using a programming-language ABI (Application Binary Interface, i.e. mostly the C ABI, which can have a platform-dependent layout/encoding).
3. Code reused so you don't rewrite everything all the time, e.g. hash maps, algorithms, etc.
The messy part on Linux is that, for historic reasons, the latter two categories were not treated differently even though they have _very_ different properties w.r.t. the software life cycle. Dependencies in the last category exist for your code and your specific use case only! The versions usable with your program are often far more limited; breaking changes are far more normal; LTO is often desirable or even needed; other programs needing different, incompatible versions is the norm; even versions with security vulnerabilities can be fine _iff_ the vulnerabilities are on code paths not used by your application; etc. The fact that Linux has a long history of treating them the same is IMHO a huge fuck-up.
It made sense in the 90s. It hasn't for about 20 years now.
It's just completely in conflict with how software development works in practice, and this has put a huge amount of strain on OSS maintainers, due to things like distros shipping incompatible versions, potentially by (even incorrectly) patching your code... and end users blaming you for it.
IMHO Linux should have a way to handle such application-specific dependencies in all cases, from scripting dependencies (e.g. Python), over shared objects, to static linking (which doesn't need any special handling outside of the build tooling).
People have estimated the storage size difference of linking everything statically, and AFAIK it's irrelevant relative to the availability and pricing of storage on modern systems.
And the argument that you might want to use a patched version of a dependency "for security" reasons fails if we consider that this has led to security incidents more than once. Most software isn't developed to support this at all, and bugs can be subtle and bad to the point of an RCE.
And yes, there are special cases and gray areas in between these categories.
E.g. dependencies in the 3rd category that you want to be able to update independently, or dependencies from the 2nd that are often handled like the 3rd for various practical reasons, etc.
Anyway, coming back to the article: Rust can handle dynamic linking just fine, but only for the C ABI as of now. And while Rust might get some form of Rust ABI to make dynamic linking better, it will _never_ handle it for arbitrary libraries, as that is neither desirable nor technically possible.
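As a rough illustration of what "dynamic linking via the C ABI" means here (a hypothetical sketch, not anything apt ships): the crate is built as a `cdylib`, and only `extern "C"` functions with C-representable types cross the boundary, which is exactly why arbitrary generic Rust APIs can't be exported this way.

```rust
// Built with `crate-type = ["cdylib"]` in Cargo.toml, this produces a normal
// shared object that C (or C++, or other Rust) callers can link against.
use std::cmp::Ordering;

#[no_mangle]
pub extern "C" fn demo_compare(a: *const u8, a_len: usize,
                               b: *const u8, b_len: usize) -> i32 {
    // SAFETY: the caller promises both pointers reference valid buffers of
    // the stated lengths -- the usual, unchecked contract of a C API.
    let a = unsafe { std::slice::from_raw_parts(a, a_len) };
    let b = unsafe { std::slice::from_raw_parts(b, b_len) };
    match a.cmp(b) {
        Ordering::Less => -1,
        Ordering::Equal => 0,
        Ordering::Greater => 1,
    }
}
```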
---
EDIT: Just for context, in the case of C you also have to rebuild everything that uses header-only libraries or pre-processor macros; not doing so is risky, as you'd be mixing different versions of the same software in one build. The same goes (somewhat) for C++ with anything using template libraries. The way you can speed this up is by caching intermediate build artifacts, and that works for Rust, too.
I hate learning new things. It sucks. Also, I hate things that make my knowledge of C++ obsolete. I hate all the people that are getting good at rust and are threatening to take away my job. I hate that rust is a great leveler, making all my esoteric knowledge of C++ that I have been able to lord over others irrelevant. I hate that other people are allowed to do this to me and to do whatever they want, like making the decision to use rust in apt. It’s just sad and crazy to me. I can’t believe it. There are lots of people like me who are scared and angry and we should be able to control anyone else who makes us feel this way. Wow, I’m upset. I hope there is another negative post about rust I can upvote soon.
Overall, I think Rust is probably too dangerous to introduce into core software. Every time there is a donation to the Rust Foundation, the Rust community is in an uproar that it is not a large enough fraction of gross revenue. Linux, apt, are all currently both free as in speech and free as in beer. If we have to start donating to the Rust Foundation a percentage of gross revenue for every tool that we use written in Rust, it will cost a lot. Probably much better to just not put Rust in the kernel or in apt.
It's a Trojan horse language. There are no demands from C users that anyone donate to C non-profits. Much better, safer language to use from an ecosystem perspective.
Sure, it's on Reddit /r/rust. I'll provide links at the end, but it happens every time there is a donation.
> > > Multi trillion dollar conglomerate invests a minuscule fraction of a fraction of their monthly revenue into the nonprofit foundation that maintains the tool that will save them billions
> > So they make around 220k per minute, 350k in under 2 minutes. Still, any amount is better than nothing at all.
> I genuinely hate the thought of "Better than nothing". We should be saying "go big or go home."
Pretty popular thread. Approximately 900 upvotes over those three comments.
> That's like a millisecond of Google revenue.
and separately
> I mean thats nice. But in all honesty, is 1 Million still a lot?
That's from a while ago so it's smaller. But you can tell the sentiment is rising because these expressions of "it's not enough" are becoming more popular. Just a matter of time before the community tries to strong-arm other orgs with boycotts and this and that. We've seen this before.
> > Cool, but also depressing how relatively small these “huge” investments in core technologies actually are.
> Yeah, seriously. This is comparatively like me leaving a penny in the “take a penny leave a penny” plate at the gas station.
and separately
> Ah yes, the confidence displayed by allocating 0.0004% of your yearly revenue.
> Satya alone earns that in a 40 hr work week.
It's a pretty old playbook to use free-software language to get one's technology entrenched, then there's murmurs about how not enough money is being sent back to the people making it, the organization then uses the community as the stalking horse to promote this theory, and then finally comes the Elastic License relicensing.
Elastic did it. MongoDB did it. Hashicorp did it. Redis did it. I get the idea, but we should pre-empt the trojan horse when we see it coming. I know I know. You can fork when it happens etc. but I'm not looking forward to switching my toolchain to "Patina" or whatever they call the fork.
And if you think I'm some guy with an axe to grind, I have receipts for Rustlang enthusiasm:
I have a dual pentium pro 200 that runs gentoo and openbsd, but rust doesn't ship i586 binaries, only i686+. So I would need to compile on a separate computer to use any software that is using rust.
There is already an initrd package tool I can't use since it is rust based, but I don't use initrd on that machine so it is not a problem so far.
The computer runs modern Linux just fine, I just wish the Rust team would at least release an "i386" bootstrap binary that actually works on all i386, like all of the other compilers.
"We don't care about retro computers" is not a good argument imho, especially when there is an easy fix. It was the same when the Xorg project patched out support for RAMDAC and obsoleted a bunch of drivers instead of fixing it easily. I had to fix the S3 driver myself to be able to use my S3 trio 64v+ with a new Xorg server.
This sounds like it's fun. However, I have to ask, why should the linux world cater to supporting 30 year old systems? Just because it scratches an itch?
You can grab a $150 NUC which will run circles around this dual Pentium Pro system while also using a fraction of the power.
You obviously have to do a lot of extra work, including having a second system, just to keep this old system running. More work than it'd take to migrate to a new CPU.
I grew up without money; it makes me laugh when I read comments like this. "You can just" -- yeah, when you're fortunate enough to have a strong support system, you can.
My understanding is that the systems are not meaningfully common, and are hobbyist archs. But the idea that dropping support is fine because you can just throw money at it is so incredibly divorced from reality that I actually feel bad for anyone that believes this.
I deeply believe that if you don't like what a maintainer of FOSS code has done, you should fork the project. Admittedly that's a very onerous suggestion. But more important than that, you should help people when you can. If you decide to drop support for a bunch of people because it makes your job easier or simpler, when you don't need to, you're the bad guy in the story. That's the way this announcement has been written, and most reasonable people object to that kind of behavior. Selfishness should feel a bit offensive to everyone.
I have plenty of relatives without money or resources and $150 is something they can all afford.
It's not even the floor of the amount of money needed (Here's a used NUC for $30 [1]), but rather just showing that a new system can be had for a lot less than many people expect.
You are the one divorced from reality if you think there's an army of poor orphans running modern linux on pentium pros.
Affording rent and health insurance is a FAR bigger issue than being able to throw a little money towards a new computer once every 10 years.
The system is actually running fine standalone since I have been able to avoid rust software.
As to why it should cater to it, it's more that there is no need to remove something that already works just to remove it.
It is possible to compile rustc on another system so it supports i586 and below. Just a small change in the command line options. And it doesn't degrade the newer systems.
I have plenty of faster machines, I just enjoy not throwing things away or making odd systems work. It's called having fun :)
> it's more that there is no need to remove something that already works just to remove it.
There actually is. Support for old systems isn't free. Mistakes in the past are hard to fix and verify on these old systems. Particularly, the fact that there's not a whole lot of devs with access to dual pentium pro systems to verify changes which would affect such systems.
That means that if there's a break in the kernel or elsewhere that ultimately impacts such a system, they'll hear from a random retro-computing enthusiast, and it takes time from everyone to resolve the issue and review patches to fix the retro computer.
Time is precious for open source software. It's in limited supply.
I get doing this for fun or the hell of it. But you do need to understand there are costs involved.
Yes, sorry I remembered incorrectly.
The rust compiler claims to be i686 and the CPU is i686 too, but the rust compiler is using Pentium 4 only instructions so it doesn't actually work for i686.
Edit: I see from the sister post that it is actually llvm and not rust, so I'm half barking up the wrong tree. But somehow this is not an issue with gcc and friends.
> "We don't care about retro computers" is not a good argument imho,
It absolutely is. If you want to do the work to support <open source software> for <purpose> you're welcome to do so, but you aren't entitled to have other people do so. There's some narrow exceptions like accessibility support, but retro computing ain't that.
I enjoy Rust, but I enjoy not breaking things for users and not making lives harder for other devs even more.
FYI, for the purpose of disclosing bias: I am one of the few "rust first" developers. I learned the language in 2021, and it was the first "real" programming language I learned how to use effectively. Any attempts I have made to dive into other languages have been short-lived and incredibly frustrating, because Rust is a first-class experience in how to make a systems programming language.
Very few people are forcing “rewrite in rust” down anyone’s throats. Sometimes it’s the maintainers themselves who are trying to be forward-thinking and undertake a rewrite (e.g., fish shell), sometimes people are taking existing projects and porting them just to scratch an itch and it’s others’ decisions to start shipping it (e.g., coreutils). I genuinely fail to see the problem with either approach.
C’s long reign is coming to an end. Some projects and tools are going to want to be ahead of the curve, some projects are going to be behind the curve. There is no perfect rate at which this happens, but “it’s battle-tested” is not a reason to keep a project on C indefinitely. If you don’t think {pet project you care about} should be in C in 50 years, there will be a moment where people rewrite it. It will be immature and not as feature-complete right out the gate. There will be new bugs. Maybe it happens today, maybe it’s 40 years from now. But the “it’s battle tested, what’s the rush” argument can and will be used reflexively against both of those timelines.
It is basic knowledge that memory safety bugs are a significant source of vulnerabilities, and by now it is well-established that the first developer who can write C without introducing memory safety bugs hasn't been born yet. In other words: if you care about security at all, continuing with the status quo isn't an option.
The C ecosystem has tried to solve the problem with a variety of additional tooling. This has helped a bit, but didn't solve the underlying problem. The C community has demonstrated that it is both unwilling and unable to evolve C into a memory-safe language. This means that writing additional C code is a Really Bad Idea.
Software has to be maintained. Decade-old battle-tested codebases aren't static: they will inevitably require changes, and making changes means writing additional code. This means that your battle-tested C codebase will inevitably see changes, which means it will inevitably see the introduction of new memory safety bugs.
Google's position is that we should simply stop writing new code in C: you avoid the high cost and real risk of a rewrite, and you also stop the never-ending flow of memory safety bugs. This approach works well for large and modular projects, but doing the same in coreutils is a completely different story.
Replacing battle-tested code with fresh code has genuine risks, there's no way around that. The real question is: are we willing to accept those short-term risks for long-term benefits?
And mind you, none of this is Rust-specific. If your application doesn't need the benefits of C, rewriting it in Python or Typescript or C# might make even more sense than rewriting it in Rust. The main argument isn't "Rust is good", but "C is terrible".
People so often get hung up on Rust's memory safety features, and dismiss it as though that's all it brings to the table. Far from it! Even if Rust were unsafe by default, I'd still rather use it than, say, C or C++ to develop large, robust apps, because it has a long list of features that make it easy to write correct code, and really freaking challenging to write blatantly incorrect code.
Frankly, I envy you, except that I don't envy what it's going to be like when you have to hack on a non-Rust code base that lacks a lot of these features. "What do you mean, int overflow. Those are both constants! How come it didn't let me know I couldn't add them together?"
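For readers wondering what the overflow quip refers to: Rust refuses arithmetic it can evaluate to an overflow at compile time, and makes wrapping an explicit opt-in at runtime. A trivial sketch:

```rust
// Both operands are constants, so the compiler can see the overflow and
// rejects the commented-out line outright at compile time.
const A: u8 = 200;
const B: u8 = 100;
// const C: u8 = A + B;   // error: evaluation of constant value would overflow

fn main() {
    // At runtime, plain `+` panics on overflow in debug builds; if wrapping
    // is genuinely what you want, you have to say so:
    let c = A.wrapping_add(B);
    println!("{c}"); // prints 44 (300 mod 256)
}
```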
Rust is the first language for a long time with a chance at improving this situation. A lot of the pushback against evangelism is from people who simply want to keep the status quo, because it's what they know. They have no concept of the systemic consequences.
I'd rather see over-the-top evangelism than the lack of it, because the latter implies that things aren't going to change very fast.
No new technology should be an excuse to engage in unprofessional conduct.
When you propose changes to software, you listen to feedback, provide analysis of the benefits and detriments, and make an informed decision.
Rust isn't special, and isn't a pass to cause endless heartache for end users and developers because your code is in a "safer" language.
New rust code should be held to the same standards as new C and C++ code that causes breakage.
Evangelism isn't useful here, let the tool speak for itself.
It's very easy to justify for yourself why you aren't addressing the hard problems in your codebase. Combine that with a captive audience, and you end up with everyone running the same steaming heap of technical debt and being unhappy about it.
But the second an alternative starts to get off the ground there's suddenly a reason to address those big issues: people are leaving, and it is clear that complacency is no longer an option. Either evolve, or accept that you'll perish.
Posts that say "I rewrote X in Rust!" shouldn't actually be controversial. Every time you see one, you should think to yourself wow, the software world is moving towards being more stable and reliable, that's great!
I also doubt Rust brings as many advantages in terms of stability as people claim. The C code I rely on in my daily work basically never fails (e.g. I can't remember "vim" ever crashing on me in the 30 years I've used it). That this is all rotten C code that needs to be rewritten is just nonsense. IMHO it would be far more useful to invest in proper maintenance and incremental improvements.
This is an interesting read on software projects and failure: https://spectrum.ieee.org/it-management-software-failures
"Nvidia Security Team: “What if we just stopped using C?”, 170 comments (2022), https://news.ycombinator.com/item?id=42998383
This is also not an endorsement of C/C++.
This is the kind of UNIX stuff that we would even write in Perl or Tcl back in the day.
The problem is that those ports aren't supported and see basically zero use. Without continuous maintainer effort to keep software running on those platforms, subtle platform-specific bugs will creep in. Sometimes it's the application's fault, but just as often the blame will lie with the port itself.
The side-effect of ports being unsupported is that build failures or test failures - if they are even run at all - aren't considered blockers. Eventually their failure becomes normal, so their status will just be disregarded as noise: you can't rely on them to pass when your PR is bug-free, so you can't rely on their failure to indicate a genuine issue.
This will be an impediment for new architectures in the future. Instead of just "builds with gcc" we would need to wait for Rust support.
There's always rustc_codegen_gcc (a GCC backend for rustc) and gccrs (a Rust frontend for GCC). They aren't quite production-ready yet, but there's a decent chance they're good enough for the handful of hobbyists wanting to run the latest applications on historical hardware.
As to adding new architectures: it just shifts the task from "write gcc backend" to "write llvm backend". I doubt it'll make much of a difference in practice.
For what it's worth, the zero->one introduction of a new language into a big codebase always comes with a lot of build changes, downstream impact, debate, etc. It's good for that first feature to be some relatively trivial thing, so that it doesn't make the changes any bigger than they have to be, and so that it can be delayed or reverted as needed without causing extra trouble. Once everything lands, then you can add whatever bigger features you like without disrupting things.
No comment on the rest of the thread...
I think the best move would have been to announce deprecation of those ports separately. As it was announced, people who will never be impacted by their deprecation are upset because the deprecation was tied to something else (Rust) that is a hot topic.
If the deprecation of those ports was announced separately I doubt it would have even been news. Instead we’ve got this situation where people are angry that Rust took something away from someone.
EDIT: okay so I was slightly too strong: some of them were official as of 2011, but haven't been since then. The main point that this isn't deprecating any supported ports is still accurate.
It’s the way the two actions were linked that caused the controversy.
Imo this is true for going from one to a handful, but less true when going from a handful to more. Afaict there are 6 official ports and 12 unofficial ports (from https://www.debian.org/ports/).
For memory consistency, Alpha historically had value as the weakest and most likely to expose bugs. But nobody really wants to implement hardware like that anymore, almost everything falls somewhere on the spectrum of behavior bounded by x86 (strict) and Arm (weaker), and newer languages (eg. C++ 11) mean newer code can be explicit about its expectations rather than ambiguous or implicit.
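To make the "explicit about its expectations" point concrete, here is a small sketch in Rust (which adopts the C++11 memory model): the release/acquire pair is stated in the source, and the compiler emits whatever barriers the target needs, whether that's almost nothing extra on x86 or real fences on weaker machines.

```rust
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(AtomicU32::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    let (d, r) = (Arc::clone(&data), Arc::clone(&ready));
    let producer = thread::spawn(move || {
        d.store(42, Ordering::Relaxed);
        r.store(true, Ordering::Release); // publish: everything before this store...
    });

    while !ready.load(Ordering::Acquire) {} // ...is visible once the flag is observed
    assert_eq!(data.load(Ordering::Relaxed), 42);
    producer.join().unwrap();
}
```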
At most, if a rewrite were to happen, it would make much more sense in a compiled language with automatic resource management.
Take for example git: do you fully trust the content of every repository you clone? Sure, you'll of course compile and run it in a container, but how prepared are you for the possibility of the clone process itself resulting in arbitrary code execution?
The same applies to the other side of the git interaction: if you're hosting a git forge, it is basically a certainty that whatever application you use will call out to git behind the scenes. Your git forge is connected to the internet, so anyone can send data to it, so git will be processing attacker-controlled data.
There are dozens of similar scenarios involving tools like ffmpeg, gzip, wget, or imagemagick. The main power of command line utilities is their composability: you can't assume it'll only ever be used in isolation with trusted data!
Any memory safe compiled managed language will do.
Some people might complain about the startup cost of a language like Java, though: there are plenty of scripts around which are calling command-line utilities in a very tight loop. Not every memory-safe language is suitable for every command-line utility.
Your "older than 5 years" requirement isn't really fair, is it? Rust itself had its first stable release barely 10 years ago, and mainstream adoption has only started happening in the last 5 years. You'll have trouble finding any "real-world" Rust apps older than 5 years!
As to your actual question: The users of Ferrocene[0] would be a good start. It's Rust but certified for ISO 26262 (ASIL D), IEC 61508 (SIL 4) and IEC 62304 - clearly someone is interested in writing mission-critical software in Rust!
[0]: https://ferrocene.dev/
This is wrong; the author wrote a mail to the right Debian mailing list about _intended_ changes _half a year_ before shipping them. That is _exactly_ how giving people an opportunity to provide feedback before making a change works...
Sure, they made it clear they don't want any discussion to be sidetracked by topics about things Debian doesn't officially support. That is not nice, but it's understandable; I have seen way too much time wasted on discussions being derailed.
The only problem here is people overthinking things and/or having issues with very direct language IMHO.
> This is breaking support for multiple ports to rewrite some feature for a tiny security benefit
It's not breaking anything supported.
The only things breaking are unsupported, and they see only niche use anyway.
Nearly all projects have very limited capacities and have to draw boundaries, and the most basic boundary is unsupported means unsupported. This doesn't mean you don't keep unsupported use cases in mind/avoid accidentally breaking them, but it means they don't majorly influence your decision.
> And doing so on an unacceptably short timeline
Half a year for a change which only breaks unsupported things isn't "unacceptably short"; it's actually pretty long. If this weren't OSS you could be happy to get one month, and most likely less. People complain about how few resources OSS projects have, but the scary truth is most commercial projects have even fewer resources and must ship by a deadline. Hence it's very common for them to be far worse when it comes to code quality, technical debt, not correctly handling niche error cases, etc.
> to every architecture they release for
The Rust toolchain has support for every architecture _they_ release for; it breaks architectures that niche, unofficial 3rd-party projects support. Which is sad, sure, but unsupported is, in the end, unsupported.
> cost-benefit analysis done for this change.
Who says it wasn't done at all? People have done so over and over on the internet for all kinds of Linux distributions. But either way, you wouldn't include that in a mail announcing an intent to change (as you don't want discussions to be sidetracked). Also, the benefits are pretty clear:
- Using Sequoia for PGP seems to be the main driving force behind this decision; this project exists because of repeatedly running into issues (including security issues) with the existing PGP tooling. It happens to use Rust, but if there were no Rust it would still exist, just using a different language.
- Some file-format parsing is in a pretty bad state, to the point where you will most likely rewrite it to fix it and make it robust. When doing so anyway, using Rust is preferable.
- And long term: due to the clear, proven(1) benefits of using Rust for _new_ projects/code, more and more of them use it; by not "allowing" Rust to be required, Debian bars itself from using any such project (like e.g. Sequoia, which seems to be the main driver behind this change).
> this "rewrite it in rust" evangilism
Which isn't part of this discussion at all.
The main driving factor seems to be using Sequoia, not because Sequoia is in Rust but because Sequoia is very well made and well tested.
Similarly, Sequoia isn't a "let's rewrite everything in Rust" project; rather, the state of PGP tooling is so painful for certain use cases (not all), in ways you can't fix by trying to contribute upstream, that some people needed new tooling, and Rust happened to be the choice for implementing it.
They already have a Rust toolchain for every system Debian releases for.
The only architectures they're arguing about are non-official Debian ports for "Alpha (alpha), Motorola 680x0 (m68k), PA-RISC (hppa), and SuperH (sh4)", two of which are so obscure I've never even heard of them, and one of the others is most famous for powering retro video game systems like the Sega Genesis.
This right here.
As a side note, I was reading one of Cloudflare's docs on how it implemented its firewall rules, and it's utterly disappointing how the document stops being informative and suddenly starts to read like a parody of the whole cargo cult around Rust. Rust this, Rust that, and I was there trying to read up on how Cloudflare actually supports firewall rules. The way they focus on a specific and frankly irrelevant implementation detail conveys the idea that things are run by amateurs who are charmed by a shiny toy.
The wording could have been better, but I don’t see it as a dig. When you look at the platforms that would be left behind they’re really, really old.
It’s unfortunate that it would be the end of the road for them, but holding up progress for everyone to retain support for some very old platforms would be the definition of the tail wagging the dog. Any project that did that would be making a mistake.
It might have been better to leave out any mention of the old platforms in the Rust announcement and wait for someone to mention it in another post. As it was written, it became an unfortunate focal point of the announcement despite having such a small impact that it shouldn’t be a factor holding up progress.
I get the friction, especially for younger contributors, not that this is the case here. However, there are architectures that haven't received even a single revision in their lifetime, and some old heads will treat even the slightest inconvenience to their hobbyist port as a personal slight for which heads must roll.
It's the idea that's causing the backlash, not the impact.
He created (or at least re-activated) a dichotomy for zero gain, and he vastly increased the expectations for what a Rust rewrite can achieve. That is very, very bad in a software project.
The evidence for both is in your next paragraph. You immediately riff on his dichotomy:
> It’s unfortunate that it would be the end of the road for them, but holding up progress for everyone to retain support for some very old platforms would be the definition of the tail wagging the dog.
(My emphasis.)
He wants to do a rewrite in Rust to replace old, craggy C++ that is so difficult to reason about that there's no chance of attracting new developers to the maintenance team with it. Porting to Rust therefore a) addresses memory safety, b) gives a chance to attract new developers to a core part of Debian, and c) gives the current maintainer a way to eventually leave gracefully in the future. I think he even made some of these points here on HN. Anyone who isn't a sociopath sympathizes with these points. More importantly, accidentally introducing some big, ugly bug in Rust apt isn't at odds with these goals. It's almost an expected part of the growing pains of a rewrite plus onboarding new devs.
Compare that to "holding up progress for everyone." Just reading that phrase makes me force-sensitive like a Jedi: I can feel the spite of dozens of HN'ers tingling at that and other phrases in these HN comments as they sharpen their hatred, ready to pounce at the Rust Evangelists the moment this project hits a snag. (And, like any project, it will hit snags.)
1. "I'm holding on for dear life here, I need help from others and this is the way I plan to get that help"
2. "Don't hold back everyone else's progress, please"
The kind of people who hear "key party" and imagine clothed adults reciting GPG fingerprints need to comprehend that #1 and #2 are a) completely different strings and b) have very different-- let's just say magical-- effects on the behavior of even small groups of humans.
It doesn't concern me either, but there's some attitude here that makes me uneasy.
This could have been managed better. I can see a similar change in the future that could affect me, and there will be precedent. With Canonical paying devs and all, it isn't a great way of influencing a community.
I'm sure some will point out that each example above was just an isolated incident, but I perceive a growing pattern of incidents. There was a time when Debian proudly called itself "The Universal Operating System", but I think that hasn't been true for a while now.
It's frankly the only way to maintain a distribution relying almost completely on volunteer work! The more different options there are, the more expensive (both in terms of human cost, engineering time and hardware cost) testing gets.
It's one thing if you're, say, Red Hat with a serious amount of commercial customers, they can and do pay for conformance testing and all the options. But for a fully FOSS project like Debian, eventually it becomes unmaintainable.
Additionally, the more "liberty" distributions take in how the system is set up, the more work software developers have to put in. Just look at autotools, an abomination that is sadly necessary.
That's kind of the point of modern open source organizations. Let corporations fund the projects, and in exchange they get a say in terms of direction, and hopefully everything works out. The bigger issue with Ubuntu is that they lack vision, and when they ram things through, they give up at the slightest hint of opposition (and waste a tremendous amount of resources and time along the way). For example, Mir and Unity were perfectly fine technologies, but they retired them because they didn't want to see things through. For such a successful company, it's surprising that their technical direction-setting is so unserious.
https://www.reddit.com/r/linux/comments/15brwi0/why_canonica...
Yes, and more generally, as far as I am concerned, the antagonizing tone of the message, which is probably partly responsible for this micro-drama, is typical of some Rust zealots who never miss an occasion to remind C/C++ folks that they are dinosaurs (in their eyes). When you promote your thing by belittling others, you are doing it wrong.
The conclusion you drew is perfectly reasonable but I’m not sure it is correct, especially when in comparison Canonical is the newcomer. It could even be seen to impugn their integrity.
> David Kalnischkies, who is also a major contributor to APT, suggested that if the goal is to reduce bugs, it would be better to remove the code that is used to parse the .deb, .ar, and .tar formats that Klode mentioned from APT entirely. It is only needed for two tools, apt-ftparchive and apt-extracttemplates, he said, and the only "serious usage" of apt-ftparchive was by Klode's employer, Canonical, for its Launchpad software-collaboration platform. If those were taken out of the main APT code base, then it would not matter whether they were written in Rust, Python, or another language, since the tools are not directly necessary for any given port.
https://packages.debian.org/source/sid/apt
I know you can make configure-time decisions based on the architecture and ship a leaner apt-utils on a legacy platform, but it's not as obvious as "oh yeah that thing is fully auxiliary and in a totally different codebase".
> it would be better to remove the code that is used to parse the .deb, .ar, and .tar formats [...] from APT entirely. It is only needed for two tools, apt-ftparchive and apt-extracttemplates [...]
Another interesting, although perhaps tangential, criticism was that the "new solver" currently lacks a testsuite (unit tests; it has integration tests). I'm actually kind of surprised that writing a dependency solver is a greenfield project instead of using an existing one. Or is this just a dig at something that pulls in a well-tested external module for solving?
Posted in curiosity, not knowing much about apt.
It was always second-tier utilities like Aptitude that tried to search for a "solution" to conflicting packaging constraints, but this has always been outside of the core functionality, and if you accepted one of Aptitude's proposed paths, you would do so knowing that the next apt dist-upgrade was almost certainly going to hose everything again.
I think the idea in Apt-world is that it's the responsibility of the archive maintainer to at all times present a consistent index for which the newest versions of everything can coexist happily together. But this obviously breaks down when multiple archives are active on the same apt conf.
To borrow a phrase I recently coined:
If it's not tested then it's not Engineered.
You'd think that core tools would have proper Software Engineering behind them. Alas, it's surprising how many do not.
The ideal implementation of a methodology is only relevant for a small number of management who would do well with almost any methodology because they will take initiative to improve whatever they are doing. The best methodology for wide adoption is the one that works okay for the largest number of management who struggle to take responsibility or initiative.
That is to say, the methodology that requires management to take responsibility in its "lowest energy state" is the best one for most people-- because they will migrate to the lowest energy state. If the "lowest energy state" allows management to do almost nothing, then they will. If the structure allows being clueless, a lot of managers will migrate to pointy haired Dilbert manager cluelessness.
With that said; I do agree with getting products to clients quickly, getting feedback quickly, and being "agile" in adapting to requirements; but having a good plan based on actual knowledge of the requirements is important. Any strict adherence to an extreme methodology is probably going to fail in edge cases, so having the judgement of when to apply which methodology is a characteristic of good management. You've got to know your domain, know your team, and use the right tool for the job.
Mechanical engineers have to work around other components' failures all the time, because their lead times are gigantic and, no matter how much planning they do, failures still pop up.
The idea that Software Engineering has more bugs is absurd. Electronic engineers, mechanical, electric, all face similar issues to what we face and normally don't have the capacity to deploy fixes as fast as we do because of real world constraints.
Of course we can plan things better, but implementation does inform planning and vice versa and denying that is denying reality.
Integration tests are for overall sanity— do a few happy paths basically work? what about when we make changes to the packaging metadata or roll dependencies forward?
Going unit-test free makes total sense in the case of code that doesn't have much in the way of branching, or where the exceptional cases can just be an uncontrolled exit. Or if you feel confident that your type system's unions are forcing you to cover your bases. Either way, you don't need to test individual functions or modules if running the whole thing end to end gives you reasonable confidence in those.
I didn't say they're not. Integration tests definitely help towards "being tested".
> There are definitely cases for tools where you can largely get by without unit tests in favor of integration tests.
Very strong disagree. I think there are no cases where a strong integration test regime can allow a software project to forego unit tests.
Now, that said, we're probably talking the same thing with different words. I think unit tests with mocks are practically useless. But mocks are the definition of most people's unit tests. Not to me; to me unit tests use real code and real objects. To me, a unit test is what a lot of people call an integration test. And, to me, what I call an integration test, is often what people call system tests or end-to-end tests.
IMO that's on the extreme side too. I've seen a fair share of JUnit monstrosities with 10+ mocks injected "because the project has been written this way so we must continue this madness", but mocking can be done right, it's just overused so much that, well, maybe you're right - it's easier to preach it out than teach how to do it right.
No, because some things that are UB in C are not in Rust, and vice versa, so any codegen has to account for that and will result in additional verbosity that you wouldn't see in "native" code.
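A small, purely illustrative example of that extra verbosity (not the output of any particular transpiler): signed overflow is undefined behaviour in C, so a mechanical translation that wants to pin down one concrete behaviour tends to spell out wrapping arithmetic instead of the idiomatic Rust a human would write.

```rust
// The C expression `lo + hi` has UB on signed overflow; a faithful mechanical
// translation pins the behaviour down explicitly:
fn mid_transpiled(lo: i32, hi: i32) -> i32 {
    lo.wrapping_add(hi).wrapping_div(2)
}

// What a human writing "native" Rust would do instead, avoiding the overflow
// entirely:
fn mid_native(lo: i32, hi: i32) -> i32 {
    lo + (hi - lo) / 2
}

fn main() {
    assert_eq!(mid_transpiled(2, 8), 5);
    assert_eq!(mid_native(2, 8), 5);
}
```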
Comparison: I often program in Python (and teach it) - and while it has its own syntax warts & frustrations - overall the language has a "pseudocode which compiles" approach, which I appreciate. Similarly, I appreciate what Kotlin has done with Java. Is there a "Kotlin for Rust"? or another high quality system language we ought to be investing in? I genuinely believe that languages ought to start with "newbie friendliness", and would love to hear challenges to that idea.
https://matklad.github.io/2023/01/26/rusts-ugly-syntax.html
I found it reasonably convincing. For what it's worth, I found Rust's syntax quite daunting at first (coming from Python as well), but it only took a few months of continuous use to get used to it. I think "Perl-esque" is an overstatement.
It has some upsides over Python as well, notably that the lack of significant whitespace means inserting a small change and letting the autoformatter deal with syntax changes is quite easy, whereas in Python I occasionally have to faff with indentation before Black/Ruff will let me autoformat.
I appreciate that for teaching, the trade-offs go in the other direction.
This is day one stuff for declaring a dynamic array. What you really want is something like:
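(Not the original snippet, but as a rough sketch, the equivalent spellings in question look something like this, assuming `i32` elements:)

```rust
fn main() {
    // several equivalent ways to end up with an empty Vec<i32>
    let annotated: Vec<i32> = Vec::new(); // annotate the binding; the right-hand side is inferred
    let turbofished = Vec::<i32>::new();  // "turbofish" on the constructor path
    let qualified = <Vec<i32>>::new();    // qualified-path form, essentially never written by hand
    println!("{} {} {}", annotated.len(), turbofished.len(), qualified.len());
}
```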
However, the grammar is problematic here because of using less-than and greater-than as brackets in a type "context". You can explain that as either not learning from C++'s mistakes or trying to appeal to a C++ audience I guess.

Yes, I know there is a `vec!` macro. Will you require your coworkers to declare a similar macro when they start to implement their own generic types?
There are lots of other examples when you get to what traits are required to satisfy generics ("where clauses" vs "bounds"), or the lifetime signature stuff and so on...
You can argue that strong typing has some intrinsic complexity, but it's tougher to defend the multiple ways to do things, and that WAS one of Perl's mantras.
PS. The formatting tooling normalizes your second and third example to the same syntax. Personally I think it ought to normalize both of them to the first syntax as well, but it's not particularly surprising that it doesn't because they aren't things anyone ever writes.
It's really not. Only one of my examples has the equivalent of superfluous parens, and none are dereferencing anything. And I'm not defending C or C++ anyways.
When I was trying to learn Rust (the second time), I wanted to know how to make my own types. As such, the macro `vec!` mentioned elsewhere isn't really relevant. I was using `Vec` to figure things out so I could make a `FingerTree`:
And so on...

This kinda implies you've gone wrong somewhere. That doesn't mean there aren't cases where you need type annotations (they certainly exist!) but that if `Vec::new()` doesn't compile because the compiler couldn't deduce the type, it implies something is off with your code.
It's impossible to tell you exactly what the problem was, just that `<Vec<T>>::new()` is not code that you would ever see in a Rust codebase.
1. You don't want the default `i32` integer type and this is just a temporary vector of integers.
2. Rust's type inference is not perfect and sometimes the compiler will object even though there's only one type that could possibly work.
Edit: The <Vec<T>>::new() syntax is definitely never used though.
I mostly agree with your points. The angle brackets were definitely a mistake. The fact there are multiple ways to do it mostly comes down to the fact Rust bet big on type inference of local variables for ergonomics (which I've always been on the fence about myself).
So why do I defend Rust's syntax if I don't especially like it? I basically think we need to look at the bigger picture. C++ parsing is notoriously awful. It is extremely context sensitive, requires semantic information at the parse level, has serious quirks that have caused serious bugs (e.g. the vexing parse, dangling else), and is in fact literally undecidable! In contrast, Rust's grammar is extremely consistent and (almost) totally context-free. It may make choices that are aesthetically debatable, it may look noisy if you're not used to it (and maybe this even impacts readability, especially for beginners), but these feel like rather minor concerns.
For me, the _main_ design goal for syntax is to not cause major headaches for either compiler authors or language users, and I would say Rust ticks those boxes. I'm reticent to generalise my own experience, but I suspect that most people that find Rust ugly at first blush would get used to it pretty quickly, and that was really my main point!
The first version I got working was `d`, and my first thought was, "you're kidding me - the right hand side is inferring its type from the left?!?" I didn't learn about "turbo fish" until some time later.
Tbh d strikes me as the most normal - right hand sides inferring the type from the left exists in basically every typed language. Consider for instance the C code
Doing this inference at a distance is more of a feature of the sml languages (though I think it now exists even in C with `auto`) - but just going from left to right is... normal.

Anyways, I only touched SML briefly 30 some years ago, and my reaction to this level of type inference sophistication in Rust went through phases of initial astonishment, quickly embracing it, and eventually being annoyed at it. Just like data flows from expressions calculating values, I like it when the type inference flows in similarly obvious ways.
There is a reason the multiple methods detailed above exist, mostly for certain iterator patterns, such as summing an array or calling collect on an iterator. Most Rust devs probably don't use all of these forms in a single year, or maybe even in their careers.
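For the curious, a small sketch of the iterator situations meant here, where the extra annotation actually earns its keep:

```rust
fn main() {
    let xs = [1, 2, 3, 4];

    // collect() can build many container types, so one of them has to be named somewhere:
    let doubled: Vec<i32> = xs.iter().map(|x| x * 2).collect();
    let doubled_alt = xs.iter().map(|x| x * 2).collect::<Vec<i32>>();

    // sum() is generic over its output type as well:
    let total = xs.iter().sum::<i32>();

    println!("{:?} {:?} {}", doubled, doubled_alt, total);
}
```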
I don’t think I’ve ever seen the second two syntaxes anywhere.
I really don’t think this is a problem.
In real code, the only form I've ever seen out of these in the wild is your d form.
In many languages you could write:
> if (a + b) > (c + d)
or
> if a + b > c + d
And they're equivalent. Yet nobody complains that there are too many options.
There are people who program with a "fake it till you make it" approach, cutting and pasting from Stack Overflow, and hoping the compiler errors are enough to fix their mess. Historically, these are the ones your pages/books cater to, and the ones who think the borrow checker is the hard part. It doesn't surprise me that you only see code from that kind of beginner and experts on some rust-dev forum and nothing in between.
It's practically your job to defend Rust, so I don't expect you to budge even one inch. However, I hate the idea of letting you mislead the casual reader that this is somehow equivalent and "just how languages work".
The grammar could've used `Generic[Specific]` with square brackets and avoided the need for the turbo fish.
If you're being overly literal, yes, the <>s are needed here for this exact syntax. My point was not about this specific example, it's that these forms are equivalent, but some of them are syntactically simpler than others. The existence of redundant forms does not make the syntax illegitimate, or overly complex.
For this specific issue, if square brackets were used for generics, then something else would have to change for array indexing, and folks would be complaining that Rust doesn't do what every other language does here, which is its own problem.
The compiler knows when the `A` in `A[B]` is a type vs a variable.
Anyway, just to be clear: not liking the turbofish is fine, it's a subjective preference. But it's not an objective win, that's all I'm saying. And it's only one small corner of Rust's syntax, so I don't think that removing it would really alleviate the sorts of broad objections that the original parent was talking about.
But then people would grouse about it using left-bracket and right-bracket as brackets in a type "context".
Once you're writing Rust at full speed, you'll find you won't be putting lifetimes and trait bounds on everything. Some of this becomes implicit, some of it you can just avoid with simpler patterns.
When you write Rust code without lifetimes and trait bounds and nested types, the language looks like Ruby lite.
When you write Rust code with traits or nested types, it looks like Java + Ruby.
When you sprinkle in the lifetimes, it takes on a bit of character of its own.
It honestly isn't hard to read once you use the language a lot. Imagine what Python looks like to a day zero newbie vs. a seasoned python developer.
You can constrain complexity (if you even need it) to certain modules, leaving other code relatively clean. Imagine the Python modules that use all the language features - you've seen them!
One of the best hacks of all: if you're writing HTTP services, you might be able to write nearly 100% of your code without lifetimes at all. Because almost everything happening in request flow is linear and not shared.
And once you learn a few idioms this is mostly the default.
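A hedged sketch of what that tends to look like in practice (made-up types; the point is simply that everything is owned, so no lifetime annotations appear anywhere):

```rust
// No explicit lifetimes: the request is owned, the response is owned.
struct Request {
    path: String,
    body: Vec<u8>,
}

struct Response {
    status: u16,
    body: Vec<u8>,
}

fn handle(req: Request) -> Response {
    if req.path == "/health" {
        Response { status: 200, body: b"ok".to_vec() }
    } else {
        Response { status: 404, body: req.body }
    }
}

fn main() {
    let resp = handle(Request { path: "/health".into(), body: Vec::new() });
    println!("{}", resp.status);
}
```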
I'm trying to sell Rust to someone who is worried about it. I'm not trying to sound elitist. I want people to try it and like it. It's a useful tool. I want more people to have it. And that's not scaring people away.
Rust isn't as hard or as bad as you think. It just takes time to let it sink in. It's got a little bit of a learning curve, but that pain goes away pretty quick.
Once you've paid that down, Rust is friendly and easy. My biggest gripe with Rust is compile times with Serde and proc macros.
I think this depends a LOT on what you're trying to do and what you need to learn to do it. If you can get by with the std/core types and are happy with various third party crates, then you don't really need to learn the language very deeply.
However, if you want to implement new data structures or generic algorithms, it gets very deep very quickly.
There's also a massive difference between the type of C or Perl someone like me would write, versus someone coping with a more hostile environment or needing higher levels of performance. My code might be easier to read, but it technically has issues; they are mostly not relevant, while the reverse is true for a more skilled developer in a different environment. Rust seems to attract really skilled people, who have really defensive code styles or who use more of the provided language features, and that makes the code harder to read, but that would also be the case in e.g. C++.
Well if you come from C++ it's a breath of fresh air! Rust is like a "cleaned-up" C++, that does not carry the historical baggage forced by backwards compatibility. It is well-thought out from the start. The syntax may appear a bit too synthetic; but that's just the first day of use. If you use it for a few days, you'll soon find that it's a great, beautiful language!
The main problem with rust is that the community around it has embraced all the toxic traditions of the js/node ecosystem, and then some. Cargo is a terrifying nightmare. If you could install regular rust dependencies with "apt install" in debian stable, that would be a different story! But no. They want the version churn: continuously adding and removing bugs, like particle/anti-particle pairs at the boundary of a black hole.
Concerning TFA, adding rust to apt might be a step in the right direction. But it should be symmetric: apt depends on rust, that's great! But all the rust that it depends on needs to be installed by apt, and by apt alone!
I like that I can just add a dependency and be done instead of having to deal with dependencies which require downloading stuff from the internet and making them discoverable for the project specific tool chain - which works differently on every operating system.
Same goes for compiling other projects.
I wholly blame developers who are too eager to just pull new dependencies in when they could've just written 7 lines themselves.
Something I didn't appreciate for a long time is that the C/C++ ecosystem does have an npm-like package management ecosystem - it is just implemented at the level of Linux distro maintainers deciding what to package and how. Which worked ok because C was the lingua franca of Unix systems.
But actually it's valuable for programmers to be able to specify their dependencies for their own projects and update them on a schedule unconnected and uncoordinated with the OS's releases. The cargo/npm model is closer to ideal.
Of course what is even better is NixOS-like declarative specification and hashing of all dependencies
IMO, the biggest improvement to C/C++ would be ISO defining a package manager à la pip or uv or cargo. I'm so tired of writing cmake. just... tired.
Here's my arc through C/C++ build systems:
- make (copy pasted examples)
- RTFM [1]
- recursive make for all sorts of non-build purposes - this is as good as hadoop up to about 16 machines
- autotools
- cmake
- read "recursive make considered harmful" [2]
- make + templates
Anyway, once you've understood [1] and [2], it's pretty hard to justify cmake over make + manual vendoring. If you need windows + linux builds (cmake's most-advertised feature), you'll pretty quickly realize the VS projects it produces are a hot mess, and wonder why you don't just maintain a separate build config for windows.
[1] https://www.gnu.org/software/make/manual/
[2] https://news.ycombinator.com/item?id=20014348
If I was going to try to improve on the state of the art, I'd clean up a few corner cases in make semantics where it misses productions in complicated situations (the problems are analogous to prolog vs datalog), and then fix the macro syntax.
If you want a good package manager for C/C++, check out Debian or its derivatives. (I'm serious -- if you're upset about the lack of packages, there's a pretty obvious solution. Now that docker exists, the packages run most places. Support for some sort of AppImage style installer would be nice for use with lesser distros.)
> If I was going to try to improve on the state of the art
The state of the art is buck/bazel/nix/build2.
This creates a kind of geographic barrier that segregates populations of C++ users, and just like in any language, that isolation begets dialects and idioms that are foreign to anyone from a different group.
But the stewards of the language seem to pretend these barriers don't exist, or at least don't understand them, and go on to make the mountain ranges separating our valleys even steeper.
So it's not that CMake is a self-inflicted wound. It's the natural evolution of a tool to fill in the gaps left underspecified by the language developers.
Really? Why? I'm not a Rust guru, but Cargo is the only part of Rust that gave me a great first impression.
> If you could install regular rust dependencies with "apt install" in debian stable, that would be a different story! But no. They want the version churn: continuously adding and removing bugs, like particle/anti-particle pairs at the boundary of a black hole.
Except they got the order of type and variable wrong. That alone is enough reason to never use Rust, Go, TypeScript or any other language that botches such a critical cornerstone of language syntax.
I think this is why you don’t like Rust: In Rust you have to be explicit by design. Being explicit adds syntax.
If you appreciate languages where you can write pseudocode and have the details handled automatically for you, then you’re probably not going to enjoy any language that expects you to be explicit about details.
As far as “janky syntax”, that’s a matter of perspective. Every time I deal with Python and do things like “__slots__” it feels like janky layer upon layer of ideas added on top of a language that has evolved to support things it wasn’t originally planned to do, which feels janky to me. All of the things I have to do in order to get a performant Python program feel incredibly janky relative to using a language with first class support for the things I need to do.
Not what they are talking about. Rather, it's better to use words instead of symbols, like python over perl.
Instead of “turbofish” and <‘a>, there could be more keywords like mut or dyn. Semicolons and ‘c’har are straight out of the seventies as well. :: is not useful and is ugly, etc.
Dunders avoid namespace collisions and are not a big problem in practice, all one char, and easy to read. I might remove the trailing part if I had the power.
Python using indenting to convey specific programming meaning feels janky and outdated to people not familiar with Python, but Python familiar programmers don't think twice about it.
It’s well studied that words are easier to read than nested symbols.
But this is not the common case for most programming, which is about detailing business rules. Explicit and verbose (though not excessively) has been shown to be the most readable/maintainable. For example, one character variable names, common in math, are heavily discouraged in professional development.
There’s another level to this as well. To me, calculus notation looks quite elegant, while Perl and (parts of) Rust look like trash. Since they are somewhat similar, the remaining angle is good taste.
Agree about Rust, all the syntax is necessary for what it's trying to do.
Rust did build on the learnings of the past 20 years. Essentially all of its syntax was taken from other languages, even lifetimes.
Rust is trying to solve a really important problem, and so far it might well be one of the best solutions we have for it in a general sense. I 100% support its use in as many places as possible, so that it can evolve. However, its evolution seems to be thwarted by a very vocal subset of its leadership and community who have made it a part of their identity and whatever socio-political leverage toolset they use.
Just for my own curiosity, do you have any examples of suggestions for how to improve the syntax that have been brought up and dismissed by the language maintainers?
I have no way to properly evaluate that statement. My gut says no, because I see people complain about other things far more often, but I do think it's unknowable.
I'm not involved with Rust any more, and I also agree with you that sometimes Rust leadership can be insular and opaque. But the parent isn't really feedback. It's just a complaint. There's nothing actionable to do here. In fact, when I read the parent's post, I said "hm, I'm not that familiar with Kotlin actually, maybe I'll go check it out," loaded up https://kotlinlang.org/docs/basic-syntax.html, and frankly, it looks a lot like Rust.
But even beyond that: it's not reasonably possible to change a language's entire syntax ten years post 1.0. Sure, you can make tweaks, but turning Rust into Python simply is not going to happen. It would be irresponsible.
Is complaining about syntax really productive though? What is really going to be done about it?
Developers have a way of running with a hype that can be quite disturbing and detrimental in the long run. The one difference here is that rust has some solid ideas implemented underneath. But the community proselytizing and throwing non-believers under the bus is quite real.
I know that Graydon Hoare is a fan of OCaml and that it was a core inspiration for Rust, and I sometimes wonder if he gets tripped up too by having to switch between Rust-inspired and OCaml-inspired interpretations of the same characters.
For what it's worth, I am not even sure that Graydon was the one who introduced lifetime syntax. He was a fan of terseness, though: Rust's keywords used to be all five characters or shorter.
Niko and pcwalton were the ones working on regions, Niko talks a little bit about the motivation for syntax here: https://smallcultfollowing.com/babysteps/blog/2012/03/28/avo...
Later posts include /& as syntax: https://smallcultfollowing.com/babysteps/blog/2012/04/25/ref...
Eventually, another syntax: https://smallcultfollowing.com/babysteps/blog/2012/07/10/bor... which turns into a &x/ syntax: https://smallcultfollowing.com/babysteps/blog/2012/07/17/bor...
Which turns into this one, talking about variants of possible syntax: https://smallcultfollowing.com/babysteps/blog/2012/12/30/lif...
At some point, we get the current syntax: https://smallcultfollowing.com/babysteps/blog/2013/04/04/nes...
So, it happened somewhere in here...
In general, using English words consisting of a-z is easier to read. Using regex-like mojibake is harder.
For a concrete example in rust, using pipes in lambdas, instead of an arrow, is awful.
Rust is objectively not mojibake. The equivalent here would be like using a-z, as Rust's syntax is borrowed from other languages in wide use, not anything particularly esoteric. (Unless you count OCaml as esoteric, which I do believe is somewhat arguable, but that's only one thing; the argument still holds for the vast majority of the language.)
I’ve seen COBOL in the wild. No thanks.
But also, imagine reading a math proof written in English words. That just doesn’t work well.
Concise and much clearer to read vs parentheses where you gotta wonder if the params are just arguments, or a tuple, etc. What are you talking about.
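For reference, the pipe syntax in question looks like this (trivial closures, nothing project-specific):

```rust
fn main() {
    // Pipes delimit the closure's parameter list; braces are only needed
    // when the body is more than a single expression.
    let add = |a: i32, b: i32| a + b;
    let shout = |s: &str| {
        let mut out = s.to_uppercase();
        out.push('!');
        out
    };
    println!("{} {}", add(1, 2), shout("hello"));
}
```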
Coming from Python, I needed to work on some legacy Perl code. Perl code looks quite rough to a new user. After time, I got used to it. The syntax becomes a lot less relevant as you spend more time with the language.
Now I begrudge any time I have to go back to python. It feels like its beauty is only skin deep, but the ugly details are right there beneath the surface: prolific duck typing, exceptions as control flow, dynamic attributes. All these now make me uneasy, like I can’t be sure what my code will really do at runtime.
Rust is ugly but it’s telling you exactly what it will do.
I feel like this sentiment comes from people who haven't really taken the time to fully see what the Python ecosystem is.
Any language can have shittily written code. However, languages that disallow it by default mean that you have to spend extra time prototyping things, whereas in Python you can often make things work without much issue. Dynamic typing and attributes make the language very flexible and easily adaptable.
The nice thing about Python is that it allows you to do either. And naturally, Python has gotten much faster, to the point where it's as fast as Java for some things, because when you don't use dynamic typing, it actually recognizes this and optimizes compiled code without having to carry that type information around.
I'm not a python hater; you can get some great stuff done with it quickly. But my confidence in writing large complex systems in it is waning.
I don’t think the memory safety guarantees of Rust could be expressed in the syntax of a language like C or Go.
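To make that concrete, here is a small sketch of a guarantee that lives entirely in the signature (a hypothetical function, but standard Rust):

```rust
// The 'a in the signature promises that the returned slice borrows from
// `haystack`, never from `needle`; C's type system has no way to state that.
fn find<'a>(haystack: &'a str, needle: &str) -> Option<&'a str> {
    haystack
        .find(needle)
        .map(|start| &haystack[start..start + needle.len()])
}

fn main() {
    println!("{:?}", find("hello world", "world"));
}
```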
Example: You read the expression "x.f", say, in the output of git-diff. Is x a struct object, or a pointer to a struct? Only by referring to enclosing context can you know for sure.
I don’t think that’s the case, somehow most ML derived languages ended up with stronger type system and cleaner syntax.
I assume you’re talking about OCaml et al? I’m intrigued by it, but I’m coming from a Haskell/C++ background.
Rust is somewhat unique among systems languages because it’s the first one that’s not “simple” like C but is still used for systems tools, more so than Go is, as far as I’m aware.
Which probably has to do with its performance characteristics being close to the machine, in a way Go cannot match (i.e. LLVM-based, no GC, etc.)
Sometimes the different sets of angle and curly brackets adding up can look ugly at first, and maybe the anonymous function syntax of || {}, but it grows on you if you spend some time with the language (as do all syntaxes, in my experience).
Many features and stylistic choices from ML derivatives have made their way into Swift, Typescript, and other non-ML languages.
I often say that if you want to be a career programmer, it is a good idea to deeply learn one Lisp-type language (which will help with stuff like Python), one ML-type language (which will help with stuff like Rust) and one C-type language (for obvious reasons.)
[0] https://en.wikipedia.org/wiki/ML_(programming_language)
Anyway, to your point, I think a newbie could pick up the basics quickly and later learn more advanced things. In terms of speed, like 3 different times I've compared some Nim impl to a Rust impl and the Nim was faster (though "at the extreme" speed is always more a measure of how much optimization effort has been applied, esp. if the language supports inline assembly).
https://cython.org/ , which is a gradually typed variant of Python that compiles to C, is another decent possibility.
All the python programs I've had to maintain (I never choose python) have had major maintainability problems due to python's clean looking syntax. I can still look at crazy object oriented perl meta-programming stuff I wrote 20 years ago, and figure out what it's doing.
Golang takes another approach: They impoverished the language until it didn't need fancy syntax to be unambiguously readable. As a workaround, they heavily rely on codegen, so (for instance) Kubernetes is around 2 million lines of code. The lines are mostly readable (even the machine generated ones), but no human is going to be able to read them at the rate they churn.
Anyway, pick your poison, I guess, but there's a reason Rust attracts experienced systems programmers.
Given the constraint that they had to keep it familiar to C++ people, I'd say they did a wonderful job. It's like C++ meets OCaml.
Do you have any particular complaints about the syntax?
In fact there are some things about the syntax that are actually nice like range syntax, Unit type being (), match expressions, super explicit types, how mutability is represented etc.
I'd argue it's the most similar system level language to Kotlin I've encountered. I encourage you to power through that initial discomfort because in the process it does unlock a level of performance other languages dream of.
Beyond that issue, yeah most of Rust's syntactic noise comes from the fact that it is trying to represent genuinely complicated abstractions to support statically-checked memory safety. Any language with a garbage collector doesn't need a big chunk of Rust's syntax.
I'm learning rust and the sample code I frequently find is... cryptically terse. But the (unidiomatic, amateurish) code I write ironically reads a lot better.
I think rust attracts a former c/c++ audience, which then brings the customs of that language with it. Something as simple as your variable naming (character vs c, index vs i) can already reduce issues.
While it's not a systems language, have you tried Swift?
The other features it shares with Rust are also shared by many other languages.
The question is: is it worth it?
With Rust for the answer is yes. The reliability, speed, data-race free nature of the code I get from Rust absolutely justifies the syntax quirks (for me!).
The reason is the same for any language (including Perl, but excepting those meme languages where obfuscation is a feature): the early adopters don't think it's unreadable.
Ironically, I find Python syntax frustrating. Imports and list comprehensions read half backwards, variable bindings escape scope, dunder functions, doc comments inside the function, etc.
I often see comparisons to languages like Python and Kotlin, but both encode far less information in their syntax because they don't have the same features as Rust, so there's no way for them to express the same semantics as Rust.
Sure, you can make Rust look simpler by removing information, but at that point you're not just changing syntax, you're changing the language's semantics.
Is there any language that preserves the same level of type information while using a less "janky" syntax?
Maybe Ada, D or Nim might qualify?
I said this years ago and I was basically told "skill issue". It's unreadable. I shudder to think what it's like to maintain a Rust system at scale.
Python is a language that became successful not because it was the best in its class, but because it was the least bad. It became the lingua franca of quantitative analysis because R was even worse and Matlab was a closed ecosystem with strong whiffs of the 80s. It became successful because it was the least bad glue language for getting up and running with ML and later on LLMs.
In comparison, Rust is a very predictable and robust language. The tradeoff it makes is that it buys safety for the price of higher upfront complexity. I'd never use Rust to do research in. It'd be an exercise in frustration. However, for writing reliable and robust systems, it's the least bad currently.
My R use was self-taught, as well. I refused to use proprietary software for school all through high school and university, so I used R where we were expected to use Excel or MatLab (though I usually used GNU Octave for the latter), including for at least one or two math classes. I don't remember anything being tricky or difficult to work with.
I'll grant my only exposure has been a two- or three-day "Intro to R" class but I ran screaming from that experience and have never touched it again.
It maybe worked against me that I am a programmer, not a statistician or researcher.
So is it just that the stdlib is really big and messy?
maybe unreadable is too strong of a word, but there is a valid point of it looking unapproachable to someone new
This seems kinda self-contradicting. Special symbols are there to make the syntax terse, not verbose. Perhaps your issue is not with how things are written, but that there's a lot of information for something that seems simpler. In other words, a lot of semantic complexity, rather than an issue with syntax.
Using matklad's first example from his article on how the issue is more the semantics[1]:
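For reference, the snippet that article opens with is (roughly) the standard library's `std::fs::read`:

```rust
use std::fs::File;
use std::io::{self, Read};
use std::path::Path;

// Approximately the opening example from the linked post.
pub fn read<P: AsRef<Path>>(path: P) -> io::Result<Vec<u8>> {
    fn inner(path: &Path) -> io::Result<Vec<u8>> {
        let mut file = File::open(path)?;
        let mut bytes = Vec::new();
        file.read_to_end(&mut bytes)?;
        Ok(bytes)
    }
    inner(path.as_ref())
}
```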
we can imagine a much less symbol-heavy syntax inspired by POSIX shell, FORTH, and Ada, and I think we'll all agree it would be much less readable even though the only punctuation is `=` and `.`. So "symbol heavy" isn't a root cause of the confusion; it's trivial to make worse syntax with fewer symbols. And I like RPN syntax & FORTH.

[1] https://matklad.github.io/2023/01/26/rusts-ugly-syntax.html
Sort of like training wheels, eventually you stop using it.
That's the thing though... Rust does build on many of those learnings. For starters, managing a big type system is better when some types are implicit, so Rust features type inference to ease the burden in that area. They've also learned from C++'s mistake of having a context sensitive grammar. They learned from C++'s template nightmare error messages so generics are easier to work with. They also applied learnings about immutability being a better default than mutability. The reason Rust is statically linked and packages are managed by a central repository is based on decades of seeing how difficult it is to build and deploy projects in C++, and how easy it is to build and deploy projects in the Node / NPM ecosystem. Pattern matching and tagged unions were added because of how well they worked in functional languages.
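As a small illustration of that last point, the tagged-union-plus-pattern-matching combination looks like this (a toy example):

```rust
// A tagged union ("enum") plus exhaustive pattern matching.
enum Shape {
    Circle { radius: f64 },
    Rect { width: f64, height: f64 },
}

fn area(shape: &Shape) -> f64 {
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { width, height } => width * height,
    }
}

fn main() {
    println!("{}", area(&Shape::Rect { width: 2.0, height: 3.0 }));
}
```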
As for "Perl-esque unreadability" I submit that it's not unreadable, you are just unfamiliar. I myself find Chinese unreadable, but that doesn't mean Chinese is unreadable.
> Is there a "Kotlin for Rust"?
Kotlin came out 16 years after Java. Rust is relatively new, and it has built on other languages, but it's not the end point. Languages will be written that build on Rust, but that will take some time. Already many nascent projects are out there, but it is yet to be seen which will rise to the top.
Part of this is style and conventions though. I have implemented an STL container before, and that templating hell is far worse than anything I’ve ever seen in the Rust ecosystem. But someone following modern C++ conventions (e.g. a Google library) produces very clean and readable code.
I mean, Rust does. It builds on 20+ years of compiler and type system advancements. The syntax is verbose if you include all the things you can possibly do. If you stick to the basics it's pretty similar to most other languages. Hell, I'd say a lot of Rust syntax is similar to type-hinted Python.
Having said that, comparing a GC'd dynamic language to a systems programming language just isn't a fair comparison. When you need to be concerned about memory allocation you just need more syntax.
The reason python looks like procedural pseudocode is because it was designed to look like procedural pseudocode. Rust is not just a new skin over ALGOL with a few opinionated idioms, it's actually different - mostly to give hints of intention to the compiler, but I don't even think it started from the same place. It's more functional than anything imo, but without caring about purity or appearance, and that resulted in something that superficially looks sufficiently ALGOL-like to confuse people who are used to python or Kotlin.
> I genuinely believe that languages ought to start with "newbie friendliness", and would love to hear challenges to that idea.
In conclusion, I think this is a red herring. Computer languages are hard. What you're actually looking for is something that is ALGOL-like for people who have already done the hard work of learning an ALGOL-like. That's not a newbie, though. Somebody who learned Rust first would make the same complaint about python.
So it’s strange to hear a comparison. Maybe there’s something I’m missing.
It seems closer to C++ syntax than Perl.
I seriously don’t get it.
Where’s the “Perl-esqueness”?

In the places where you need to add lifetime annotations, it's certainly useful to be able to see them in the types, rather than relegate them to the documentation like in C++; cf. all the places where C++'s STL has to mention iterator and reference invalidation.
The handler function isn't actually unnecessary, or at least, it isn't superfluous: by default, the signature would include 'a on self as well, and that's probably not what you actually want.
I do think that the example basically boils down to the lifetime syntax though, and yes, while it's a bit odd at first, every other thing that was tried was worse.
To clarify, I meant the 'a in `Box<dyn Handler + 'a>` in the definition of `process_handler` is unnecessary. I'm not saying that the <'a> parameter in the definition of Handler::handle is unnecessary, which seems to be what you think I said, unless I misunderstood.
You choose as your example a pretty advanced use case.
Given the abundance of the hundreds of deb-* and dh-* tools across different packages, it is surprising that apt isn’t more actively split into separate, independent tools. Or maybe it is, but they are all in a monorepo, and the debate is about how if one niche part of the monorepo uses Rust then the whole suite can only be built on platforms that support Rust?
It’s like arguing about the bike shed when everyone takes the bus except for one guy who cycles in every four weeks to clean the windows.

That said, eventually more modern languages will be dependencies of the tools one way or another (and they should be). So probably Debian as a whole should come to a consensus on how that should happen, so it can happen in some sort of standard and fair fashion.
"why don't we just build a tool that can translate memory-safe Rust code into memory-unsafe C code? Then we don't have to do anything."
This feels like swimming upstream just for spite.
Fwiw, there are two such ongoing efforts. One[1] is an alternative Rust compiler, written in C++, that emits C (which the project calls high-level assembly); the other[2] is a Rust compiler backend/plugin (with C output as an extra goal beyond its initial aim of compiling Rust to CLR asm). The latter is apparently[3] quite modular and could be adapted for other targets too. Other options are continuing/improving the GCC front-end for Rust, and a recent attempt at a Rust compiler written in C[4] that compiles to QBE IR, which can then be compiled with QBE/cc.
[1]: https://github.com/thepowersgang/mrustc [2]: https://github.com/FractalFir/rustc_codegen_clr [3]: https://old.reddit.com/r/rust/comments/1bhajzp/ [4]: https://codeberg.org/notgull/dozer
There are lunatics who want to replace basic Unix tools like sudo, etc., which have been battle-tested for ages, with rewrites that have been a mess of bugs so far.
Instead, Rust should find its niches beyond rewriting what works, tackling what doesn't.
> There's lunatics ...
I think the problem is people calling developers "lunatics" and telling them which languages they must use and which software they must not rewrite.
Battle tested is not bulletproof: https://cybersecuritynews.com/sudo-linux-vulnerability/
Applying strict compile time rules makes software better. And with time it will also become battle tested.
My point is against rewrites of critical software just for the sake of rewriting it in *insert my favorite language*. Zig is also a safer language than C, as are many other alternatives, yet the Zig community is not obsessed with rewriting old software but with writing new software. And the Zig compiler has excellent C interop (in fact it can compile C/C++), yet the community is more focused on writing new software.
There are many factors that make software reliable, it's not just a matter of pretty types and memory safety, there's factors like platform/language stability, skill and expertise of the authors, development speed and feedback.
https://www.oligo.security/blog/new-sudo-vulnerabilities-cve...
And by the way, we had to replace almost all of the basic Unix tools at the turn of the century because they were completely unfit for purpose. There aren't many left.
This holds for many things in C
If it were correct, we wouldn't see these issues continue to pop up. But we do.
What would length represent? Bytes? Code points?
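To make the ambiguity concrete, a minimal Rust sketch (counts assume the precomposed form of 'é'):

```rust
fn main() {
    let s = "café";                    // the 'é' is a single scalar value, two bytes in UTF-8
    println!("{}", s.len());           // 5 - length in bytes
    println!("{}", s.chars().count()); // 4 - length in Unicode scalar values
    // Grapheme clusters (what a user perceives as "characters") need an
    // external crate such as unicode-segmentation.
}
```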
Anyway, I think what you are asking for already exists in the excellent ICU library.
And it's not a very easy thing to maintain. Unicode stuff changes more often than you might think and it can be political.
Of course there's a risk that new issues are introduced, but again, that depends a lot on the verification suite for the existing tool.
Also, just because someone did a port, doesn't mean it has to be adopted or that it should replace the original. That's open source / the UNIX mentality.
Supporting Rust attracts contributors, and those contributors are much less likely to introduce vulnerabilities in Rust when contributing vs alternatives.
not vulnerabilities in general.
To my knowledge, this isn’t the case.
> To my knowledge, this isn’t the case.
Tell us more?
Rust has unwinding (panics), C doesn’t.
There's nothing preventing it from being some specific invocation of a narrow set of compilers, like gcc-only, within some specific version range, with a set of flags configuring the UB to match what's required. UB doesn't mean non-deterministic; it's simply undefined by the standard and generally defined by the implementation (and often something you can influence with CLI flags).
Yes, that's exactly what "translating to C" means – as opposed to "translating to the very specific C-dialect spoken by gcc 10.9.3 with patches X, Y, and Z, running on an AMD Zen 4 under Debian 12.1 with glibc 2.38, invoked with flags -O0 -g1 -no-X -with-Y -foo -blah -blub...", and may the gods have mercy if you change any of this!
Do you see the problem?
I don't have a problem with Rust, it is just a language, but it doesn't seem to play along well with the mostly C/C++ based UNIX ecosystem, particularly when it comes to dependencies and package management. C and C++ don't have one, and often rely on system-wide dynamic libraries, while Rust has cargo, which promotes large dependency graphs of small libraries, and static linking.
If you think this is a random decision caused by hype, cargo culting, or a maintainer's/canonical's mindless whims... please, have a tour through the apt codebase some day. It is a ticking time bomb, way more than you ever imagined such an important project would be.
Free software is much more like democracy: everyone can voice their opinion freely, and it tends to be messy, confrontational, nitpicky. It does often lead to slowing down changes, but it also avoids the common pitfall of authoritarian regimes: going head first into a wall at the speed of light.
Opensource software doesn't have 1 governance model and most of it starts out as basically a pure authoritarian run.
It's only as the software ages, grows, and becomes more integral that it switches to more democratic forms of maintenance.
Even then, the most important OS code on the planet, the kernel, is basically a monarchy with King Linus holding absolute authority to veto the decision of any of the Lords. Most stuff is maintained by the Lords but if Linus says "no" or "yes" then there's no parliament which can override his decision (beyond forking the kernel).
Nice. So, discrimination against poor users who are running "retro" machines because that is the best they can afford or acquire.
I know of at least two devs who are stuck with older 32-bit machines, as that is what they can afford/obtain. I even offered to ship them a spare laptop with a newer CPU and they said thanks, but import duties in their country would be unaffordable. Thankfully they are also tinkering with 9front, which has little to no issues with portability and still supports 32-bit.
Further, there are still several LTS linux distros (including the likes of Ubuntu and Debian) which don't have the rust requirement and won't until the next LTS. 24.04 is supported until 2029. Meaning you are talking about a 25 year old CPU at that point.
And even if you continue to need support, Debian-based distros aren't the only ones on the planet. You can pick something else if it really matters.
15 years max; I can easily find documentation of Intel shipping Atom chips without 64-bit support in 2010, though I haven't found a good citation for when exactly that ended.
Looks like it was ultimately phased out in 2011.
It was only the first atom uarch that was 32. The next uarch (Saltwell) was 64.
Even then, people using ancient fifth-hand machines are almost certainly still going to run x86 - which means they'll have no trouble running Rust as 32-bit x86 is a supported target. Their bigger issue is going to be plain old C apps dropping 32-bit support!
"Retro" in this case genuinely means "horribly outdated". We're talking about systems with CPUs in the hundreds of MHz with probably fewer than a gigabyte in memory. You might do some basic word processing using Windows 95, but running anything even remotely resembling a modern OS is completely impossible. And considering their age and rarity, I'd be very impressed if anyone in a poor country managed to get their hands on it.
Are you trying to suggest there is a nontrivial community of people who cannot afford modern 64-bit Linux platforms, and opt for 9front on some ancient 32-bit hardware instead? Where are they coming from? Don't get me wrong, I love the 9 as much as the next guy, but you seem to paint it as some kind of affordability frontier...
One lives in Brazil and I think the other lives in the Middle East. They both have old second-hand 32-bit laptops from the 00's.
> but you seem to paint it as some kind of affordability frontier...
Yes because there are people still using old hardware because they have no choice. Also, whats the problem with supporting old architectures? Plan 9 solved the portability problem and a prominent user recently ported it to cheap MIPS routers so we can run Plan 9 on cheap second hand network hardware. We have the tool chain support so we use it.
And believe me, I understand a raspberry pi or whatever is much faster and uses less power but I would rather we reduce e-waste where possible. I still run old 32 bit systems because they work and I have them.
It's not free, it's not easy, and it introduces hard to test and rarely run code paths that may or may not have problems on the target architecture.
I think there's a pretty strong argument for running hardware produced in the last 10 years for the next 10 or 20 years. However, it should be recognized that there were massive advances in compute power from 2000 to 2010 that didn't happen from 2010 to 2025.
A Core 2 Quad (produced in 2010) has ~ 1/2 the performance of the N150 (1/4 the single core performance of the latest AMD 9950).
Meanwhile a Pentium 3 from 2000 has roughly 1/10th the performance of the same Core 2 Quad.
There are simply far fewer differences between CPUs made in 2010 and today vs CPUs made in 2000 to 2010. Even the instruction set has basically become static at this point. AVX isn't that different from SSE and there's really not a whole bunch of new instructions since the x64 update.
I have stopped replacing machines (and smartphones) because they became outdated: the vast majority of compile tasks is finished in a fraction of a second, applications basically load instantly from SSD, and I never run out of RAM. The main limiting factor in my day-to-day use is network latency - and nothing's going to solve that.
My main machine is a Ryzen 9 3900X with 32GB of RAM and a 1TB SSD. And honestly? It's probably overkill. It's on the replacement list due to physical issues - not because I believe I'll significantly benefit from the performance improvements of a current-gen replacement. I'm hoping it'll last until AM6 comes around!
Every task is either "basically instantly", "finishes in a sip of coffee", or "slow enough for a pee break / email response / lunch break". Computers aren't improving enough to make my tasks upgrade to a faster category, so why bother?
>In particular, our code to parse .deb, .ar, .tar, and the HTTP signature verification code would strongly benefit from memory safe languages and a stronger approach to unit testing.
I can understand the importance of safe signature verification, but how is .deb parsing a problem? If you're installing a malicious package you've already lost. There's no need to exploit the parser when the user has already given you permission to modify arbitrary files.
Also, there is the aspect of defence in depth. Maybe you can compromise one package that itself can't do much, but the installer runs with higher privileges and has network access.
Another angle -- installed package may compromise one container, while a bug in apt can compromise the environment which provisions containers.
And then at some point there is "oh..." moment when the holes in different layers align nicely to make four "bad but not exploitable" bugs into a zero day shitshow
Yes, .deb violates the cryptographic doom principle[1] (if you have to perform any cryptographic operation before verifying the message authentication code (or signature) on a message you’ve received, it will somehow inevitably lead to doom).
Their signed package formats (there are two) add extra sections to the `ar` archive for the signature, so they have to parse the archive metadata & extract the contents before validating the signature. This gives attackers a window to try to exploit this parsing & extraction code. Moving this to Rust will make attacks harder, but the root cause is a file format violating the cryptographic doom principle.
[1] https://moxie.org/2011/12/13/the-cryptographic-doom-principl...
APT does not verify package-level signatures (and nobody uses them anyway), so this is irrelevant.
How would adding Rust to such core dependencies not introduce new supply chain attack opportunities?
Dependencies are probably in the apt database and do not need parsing, but not everything is, or perhaps apt can install arbitrary .deb files now?
I sympathize with the maintainers of retro hardware. But honestly? Holding back the security and maintainability of a modern OS base layer just so an AlphaStation from 1998 can boot feels backwards.
The transition pain is real, and Canonical handled the communication poorly. But the 'legacy C tax' is eternal. We have to move critical infrastructure off it eventually.
If there were some glaring problems with tools like sudo, sort, apt, and you had a superior version, sure, go ahead. But this is clearly not the case. Sometimes the rust version is just the same, or even inferior, but people are ready to plunge into destruction just to say their distro has the latest and greatest. It's just vanity.
Maybe the conspiracy theories that big tech finances crazy, incompetent people into positions of power in open source projects it can no longer compete with, in order to destroy them from the inside, are not that far-fetched.
Hard Rust requirements from May onward
https://news.ycombinator.com/item?id=45779860
I think this is a nuanced call, personally, and I think there's some room for disagreements here. I just happen to believe that maybe the right decision is to fork at some point and spin off legacy forks when there's a vanishingly small suite of things that cause friction with progress.
> The sh4 port has never been officially supported, and none of the other ports have been supported since Debian 6.0.
Wikipedia tells me Debian 6 was released on 6 February 2011
The universalization from one developer's post to all Rust "fanatics" is itself an unwelcome attack. I prefer to keep my discussion as civilized as possible.
Just criticize the remark.
Perhaps this reading is colored by how this same pair of sentiments seems to come up practically every single time there's a push to change the language for some project.
Building large legacy projects can be difficult and tapping into a thriving ecosystem of packages might be a good thing. But it's also possible to have "shiny object" or "grass is greener" syndrome.
If that’s not arrogant, I don’t know what is.
This is forceful, assertive, and probably makes people angry.
Does the speaker have the authority to make this happen? Because if so, this is just a mandate and it's hard to find some kind of moral failing with a change in development direction communicated clearly.
This seems like a long window, given to ports to say, "we are making changes that may impact you, heads up." The options presented are, frankly, the two primary options "add the dependency or tell people you are no longer a current port".
It's just a roundabout way of saying "anything that isn't running Rust isn't a REAL computer". Which is pretty clearly an arrogant statement, I don't see any other way of interpreting it.
Arguing that support for certain architectures should be removed because they see very little real world use is totally valid. But it's possible to do so in a respectful way, without displaying such utter contempt for anyone who might disagree.
You can't go around screaming "your code SUCKS and you need to rewrite it my way NOW" at everyone all the time and expect people to not react negatively.
It seems you are imagining things and hate people for the things you imagined.
In reality there are situations where during technical discussions some people stand up and with trembling voice start derailing these technical discussions with "arguments" like "you are trying to convince everyone to switch over to the religion". https://youtu.be/WiPp9YEBV0Q?t=1529
If you search for HN posts with C++ in the title from the last year, the top post is about how C++ sucks and Rust is better. The fourth result is a post titled "C++ is an absolute blast" and the comments contain 128 (one hundred and twenty eight) mentions of the word "Rust". It's ridiculous.
(Maybe you mean this in some general sense, but the actual situation at hand doesn't remotely resemble a hostile unaffiliated demand against a project.)
To who is this addressed?
> If you maintain a port without a working Rust toolchain, please ensure it has one within the next 6 months, or sunset the port.
Because that sure reads as a compulsion to me.
I have to decline and explain in social settings all the time, because I will not eat meat served to me. But I do not need to preach when I observe others eating meat. I, like all humans, have a finite amount of time and energy. I'd rather spend that time focused on where I think it will do the greatest good. And that's rarely explaining why factory farming of meat is truly evil.
The best time is when someone asks, "why don't you eat meat?" Then you can have a conversation. Otherwise I've found it best to just quietly and politely decline, as more often than not one can be accommodated easily. (Very occasionally, though, someone feels it necessary to try and score imaginary points on you because they have some axe to grind against vegetarians and vegans. I've found it best to let them burn themselves out and move on. Life's too short to worry about them.)
You think it's wrong to abuse animals. Why would you apply that only to yourself and think it would be OK for others to abuse them? You wouldn't.
> was no room for a change in plan
yes, pretty much
at least the questions about it breaking unofficial distros, mostly related to some long-discontinued architectures, should never affect how a distro focused on current desktop and server usage develops.
if you have worries/problems beyond unsupported things breaking, then it should be obvious that you can discuss them; that is what the mailing list is for, and that is why you announce intent beforehand instead of just putting things in the changelog
> complained that Klode's wording was unpleasant and that the approach was confrontational
it's mostly just very direct communication, which in a professional setting is preferable IMHO; I have seen too much time wasted on misunderstandings caused by people not saying things directly out of fear of offending someone
though he still could have done better
> also questioned the claim that Rust was necessary to achieve the stronger approach to unit testing that Klode mentioned:
given the focus on Sequoia in the mail, my interpretation was that this is less about writing unit tests and more about using some (AFAIK) very well tested dependencies. But even when it comes to writing code, in my experience the ease with which you can write tests hugely affects how much testing actually gets done, and rust makes it very easy and convenient to unit test everything all the time. That is, if we speak about unit tests; other tests are still nice but not quite at the same level of convenience.
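A minimal sketch of what that convenience looks like (the `parse_size` function is made up): tests live next to the code they cover and run with a plain `cargo test`, no extra harness required:

```rust
fn parse_size(s: &str) -> Option<u64> {
    s.trim().parse().ok()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parses_plain_numbers() {
        assert_eq!(parse_size(" 42 "), Some(42));
    }

    #[test]
    fn rejects_garbage() {
        assert_eq!(parse_size("forty-two"), None);
    }
}
```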
> "currently has problems with rebuilding packages of types that systematically use static linking"
that seems like a _huge_ issue even outside of rust; no reliable Linux distro should have problems reliably rebuilding things after security fixes, no matter how they're linked
if I were to guess, this might be related to how the lower levels of dependency management on Linux are quite a mess due to requirements from the 90s that are no longer relevant today, but which some people still obsess over.
To elaborate (sorry for the wall of text) you can _roughly_ fit all dependencies of a application (app) into 3 categories:
1. programs the system provides (optionally) that are called by the app (e.g. over IPC, or by spawning a sub-process), communicating over well-defined, non-language-specific protocols. E.g. most cmd-line tools, or your system's file picker/explorer, should be invoked like that (that it often isn't is a huge annoyance).
2. programs the system needs to provide, called using a programming language ABI (Application Binary Interface, i.e. mostly C ABI, can have platform dependent layout/encoding)
3. code reused to not rewrite everything all the time, e.g. hash maps, algorithms etc.
The messy part in Linux is that, for historic reasons, the latter two categories were not treated differently even though they have _very_ different properties wrt. the software life cycle. Dependencies in the last category are for your code and your specific use case only! The supported versions usable with your program are often far more limited; breaking changes are far more normal; LTO is often desirable or even needed; other programs needing different, incompatible versions is the norm; even versions with security vulnerabilities can be fine _iff_ the vulnerabilities are on code paths not used by your application; etc. The fact that Linux has a long history of treating them the same is IMHO a huge fuck up.
It made sense in the 90s. It hasn't for ~20 years.
It's just completely in conflict with how software development works in practice, and this has put a huge amount of strain on OSS maintainers, due to stuff like distros shipping incompatible versions, potentially by (even incorrectly) patching your code... and end users blaming you for it.
IMHO Linux should have a way to handle such application-specific dependencies in all cases, from scripting dependencies (e.g. python), over shared objects, to static linking (which doesn't need any special handling outside of the build tooling).
People have estimated the storage size difference of linking everything statically, and AFAIK it's irrelevant given the availability and pricing of storage on modern systems.
And the argument that you might want to use a patched version of a dependency "for security" reasons fails if we consider that this has led to security incidents more than once. Most software isn't developed to support this at all, and bugs can be subtle and bad to the point of an RCE.
And yes, there are special cases, and gray areas in between these categories.
E.g. dependencies in the 3rd category you want to be able to update independently, or dependencies from the 2nd which are often handled like the 3rd for various practical reasons, etc.
Anyway, coming back to the article: Rust can handle dynamic linking just fine, but only for the C ABI as of now. And while rust might get some form of RustABI to make dynamic linking better, it will _never_ handle it for arbitrary libraries, as that is neither desirable nor technically possible.
---
EDIT: Just for context, in the case of C you also have to rebuild everything that uses header-only libraries or pre-processor macros; not doing so is risky, as you'd be mixing different versions of the same software in one build. The same (somewhat) goes for C++ with anything using template libraries. The way you speed this up is by caching intermediate build artifacts, and that works for Rust, too.
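To make the Rust analogue of the C++ template situation concrete (made-up code, not from any real crate): a generic function is monomorphized into every crate that calls it, so a fix in the "library" only reaches users after they rebuild against the new version, which is why the rebuild story matters so much for statically compiled ecosystems:

    // Compiled into each caller's crate for every concrete T it is used with,
    // much like a C++ template or a C header-only macro.
    pub fn clamp_to_limit<T: PartialOrd>(value: T, limit: T) -> T {
        if value > limit { limit } else { value }
    }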
It's a Trojan horse language. There are no demands from C users that anyone donate to C non-profits. Much better, safer language to use from an ecosystem perspective.
Where are you seeing this happen? I'm curious because I never have, which means that I'm missing out on discussions somewhere.
> > > Multi trillion dollar conglomerate invests a minuscule fraction of a fraction of their monthly revenue into the nonprofit foundation that maintains the tool that will save them billions
> > 115 Billion / 365 days / 1440 minutes = ~ 220k
> > So they make around 220k per minute, 350k in under 2 minutes. Still, any amount is better than nothing at all.
> I genuinely hate the thought of "Better than nothing". We should be saying "go big or go home."
Pretty popular thread. Approximately 900 upvotes across those three comments.
> That's like a millisecond of Google revenue.
and separately
> I mean thats nice. But in all honesty, is 1 Million still a lot?
That's from a while ago so it's smaller. But you can tell the sentiment is rising because these expressions of "it's not enough" are becoming more popular. Just a matter of time before the community tries to strong-arm other orgs with boycotts and this and that. We've seen this before.
> > Cool, but also depressing how relatively small these “huge” investments in core technologies actually are.
> Yeah, seriously. This is comparatively like me leaving a penny in the “take a penny leave a penny” plate at the gas station.
and separately
> Ah yes, the confidence displayed by allocating 0.0004% of your yearly revenue.
> Satya alone earns that in a 40 hr work week.
It's a pretty old playbook: use free-software language to get one's technology entrenched, then there are murmurs about how not enough money is being sent back to the people making it, the organization uses the community as the stalking horse to promote this theory, and then finally comes the Elastic License relicensing.
Elastic did it. MongoDB did it. HashiCorp did it. Redis did it. I get the idea, but we should pre-empt the Trojan horse when we see it coming. I know, I know, you can fork when it happens, etc., but I'm not looking forward to switching my toolchain to "Patina" or whatever they call the fork.
And if you think I'm some guy with an axe to grind, I have receipts for Rustlang enthusiasm:
https://news.ycombinator.com/item?id=24127438
https://news.ycombinator.com/item?id=24328686
https://news.ycombinator.com/item?id=30756184
https://news.ycombinator.com/item?id=37573389
https://news.ycombinator.com/item?id=41639619
List of donation posts follows:
https://old.reddit.com/r/rust/comments/1noyqak/media_google_...
https://old.reddit.com/r/rust/comments/1ajm56w/google_donate...
https://old.reddit.com/r/rust/comments/1cnehqt/microsofts_1m...
I agree those people are being pretty ridiculous.
> And if you think I'm some guy with an axe to grind
Nah, I was just like "hmm, I don't remember really seeing that, interesting." An actual honest question, no shade implied.
There is already an initrd package tool I can't use since it is rust based, but I don't use initrd on that machine so it is not a problem so far.
The computer runs modern Linux just fine; I just wish the Rust team would at least release an "i386" bootstrap binary that actually works on all i386 machines, like all of the other compilers do.
"We don't care about retro computers" is not a good argument imho, especially when there is an easy fix. It was the same when the Xorg project patched out support for RAMDAC and obsoleted a bunch of drivers instead of fixing it easily. I had to fix the S3 driver myself to be able to use my S3 trio 64v+ with a new Xorg server.
/rant off
You can grab a $150 NUC [1] that will run circles around this dual Pentium Pro system while also using a fraction of the power.
You obviously have to do a lot of extra work, including having a second system, just to keep this old system running. More work than it'd take to migrate to a new CPU.
[1] https://www.amazon.com/KAMRUI-AK1PLUS-Processor-Computer-Eth...
I grew up without money; it makes me laugh when I read comments like this. "You can just..." Yeah, when you're fortunate enough to have a strong support system, you can.
My understanding is that the systems are not meaningfully common, and are hobbyist archs. But the idea that dropping support is fine because you can just throw money at it is so incredibly divorced from reality that I actually feel bad for anyone that believes this.
I deeply believe that if you don't like what a maintainer of FOSS code has done, you should fork the project. Admittedly that's a very onerous suggestion. But more important than that, you should help people when you can. If you decide to drop support for a bunch of people because it makes your job easier or simpler, when you don't need to, you're the bad guy in the story. That's the way this announcement was written, and most reasonable people object to that kind of behavior. Selfishness should feel a bit offensive to everyone.
I have plenty of relatives without money or resources and $150 is something they can all afford.
That's not even the floor of what you'd need to spend (here's a used NUC for $30 [1]); the point is just that a new system can be had for a lot less than many people expect.
You are the one divorced from reality if you think there's an army of poor orphans running modern linux on pentium pros.
Affording rent and health insurance is a FAR bigger issue than being able to throw a little money towards a new computer once every 10 years.
[1] https://www.ebay.com/itm/366000004972?_skw=NUC&itmmeta=01KAY...
As to why it should cater to these systems: it's more that there's no need to remove something that already works just for the sake of removing it.
It is possible to compile rustc on another system so that it supports i586 and below; it's just a small change to the command-line options, and it doesn't degrade newer systems.
I have plenty of faster machines, I just enjoy not throwing things away or making odd systems work. It's called having fun :)
There actually is. Support for old systems isn't free. Mistakes in the past are hard to fix and verify on these old systems, particularly because there aren't a whole lot of devs with access to dual Pentium Pro systems to verify changes that would affect them.
That means that if there's a breakage in the kernel or elsewhere that ultimately impacts such a system, they'll hear from a random retro-computing enthusiast, and it takes everyone's time to resolve the report and review patches to fix the retro computer.
Time is precious for open source software. It's in limited supply.
I get doing this for fun or the hell of it. But you do need to understand there are costs involved.
Wikipedia seems to corroborate this: https://en.wikipedia.org/wiki/Pentium_Pro, as do discussions on CMOV: https://stackoverflow.com/a/4429563
https://mastodon.gamedev.place/@TomF/115589875974658415
https://news.ycombinator.com/item?id=46009962
Edit: I see from the sister post that it's actually LLVM and not Rust, so I'm half barking up the wrong tree. But somehow this isn't an issue with GCC and friends.
It absolutely is. If you want to do the work to support <open source software> for <purpose> you're welcome to do so, but you aren't entitled to have other people do it for you. There are some narrow exceptions, like accessibility support, but retro computing ain't that.