Progress?
Just curious - when’s the last time you compiled the kernel yourself? Do you remember how long it took? And that was all just C, which - while not exactly fast - is at least an order of magnitude faster to compile than Rust.
I’m seriously concerned that if Linux rally slowly does become predominantly Rust, development will stop, because nobody without access to a server farm will be able compile it in any reasonable amount of time.
Rust would be better suited to a micro kernel, where the core is small and subsystems can be isolated and replaced at run time.
Edit: adding a more modern language isn’t a bad idea, IMHO, I just think it should be something like Zig, which has reasonable compile times and no runtime. Zig’s too young, but by the time it’s considered mature, Rust will either be entrenched, or such a disaster that it’ll be decades before kernel developers consider letting a new language in.
Of course compiling something without checks is fast. If that’s your standard, we should write the kernel in JS, Python, Ruby, Lua or any other dynamically typed language, since there’s no compilation time.
Progress means I don’t have to read blog posts in order to compile the kernel. Progress means I have a sane toolchain that lets me run, test, debug, manage dependencies, and even distribute my code and artefacts (documentation, compile output, …) easily. Progress means catching many more bugs at compile-time instead of runtime.
Anti Commercial-AI license
You’re throwing the baby out with the bath water with the reductio ad absurdum argument. Rust may very well be less secure than Ada - if so, then does that make it not good enough?
I say it’s not worth trading some improvement in safety for vastly longer compile times and a more cognitively complex - harder - language, which increases the barrier of entry for contributors. If the trade were more safety than C, even if not as good as Rust, but improved compile times and a reasonable comprehensibility for non-experts in the language, that’s a reasonable trade.
I have never written a line of code in Zig, but I can read it and derive a pretty good idea of what the syntax means without a lot of effort. The same cannot be said for Rust.
I guess it doesn’t matter, because apparently software developers will all be replaced by AI pretty soon.
That’s you dawg. You probably have a different background, because I can follow zig code, but have no idea what a bunch of stuff means.
See samples
pub fn enqueue(this: *This, value: Child) !void { ... !void? It’s important to return void? Watch out, void is being returned? Does that mean that you can write !Child? And what would that even mean?

const node = try this.gpa.create(Node); What does try mean there? There’s no catch, no except. Does that mean it just unwinds the stack and throws the exception up until it reaches a catch/except? If not, why put a try there? Is that an indication that it can throw?

node.* = .{ .data = value, .next = null }; Excuse me, what? Replace the contents of the node object with a new dict/map that has the keys .data and .next?

if (this.end) |end| end.next = node // What’s the lambda for? And what’s the // for? A forgotten comment or an operator? If it’s to escape a newline, why isn’t it a backslash like in other languages?

start: ?*Node. Question pointer? A nullable pointer? But aren’t all pointers nullable? Or does Zig make a distinction between zero pointers and nullable pointers?

this.start orelse return null. Is this a check for null or a check for 0, or both?

However, when I read Rust the first time, I had quite a good idea of what was going on. Pattern matching and move were new, but traits were quite understandable coming from Java with interfaces. So yeah, mileage varies wildly, and just because you can read Zig doesn’t mean the next person can.

Regardless, it’s not like either of us have any pull in the kernel (and probably never will). I fear for the day we let AI start writing kernel code…
Granted, everyone is different. The cognitive load of Rust has been widely written about, though, so I don’t think I’m an outlier.
Absolutely never, in my case. This isn’t what concerns me, though. If Rust is harder than C, then fewer people are going to attempt it. If it takes several hours to compile the kernel on an average desktop computer, even fewer are going to be willing to contribute, and almost nobody who isn’t creating a distribution is ever going to even try to compile their own kernel. It may even dissuade people from trying to start new distributions.
If, if, if. Maybe it seems as if I’m fear-mongering, but as I’ve commented elsewhere, I noticed that when looking for tools in AUR, I’ve started filtering out anything written in Rust unless it’s a -bin. It’s because at some point I noticed that the majority of the time spent upgrading software on my computer was spent compiling Rust packages. Like, I’d start an update, and every time I checked, it’d be in the middle of compiling Rust. And it isn’t because I’m using a lot of Rust software. It has had a noticeable negative impact on the amount of time my computer spends with the CPU pegged upgrading. God forgive me, I’ve actually chosen Node-based solutions over Rust ones just because there was no -bin for the Rust package.
I don’t know if this is the same type of “cancer” as in the vitriolic kernel ML email that led to the second-to-last firestorm, but this is how I’ve started to feel about Rust - if there’s a bin, great! But no source-based packages, because then updating my desktop starts to become a half-day journey. I’m almost to the point of actively going in and replacing the source-based Rust tools with anything else, because it’s turning updating my system into a day-long project.
Haskell is already in this corner. Between the disk space and glacial ghc compile times, I will not install anything Haskell unless it’s pre-compiled. And that’s me having once spent a year in a job writing Haskell - I like the language, but it’s like programming in the 70’s: you write out your code, submit it as a job, and then go do something else for a day. Rust is quickly joining it there, along with Electron apps, which are in the corner for an entirely different reason.
Zig is designed as a successor to C, no? So I assume it does syntax and things quite similarly. Rust is not a C-like language, so I don’t think this is a fair comparison at all.

But in the end, learning syntax isn’t the hard part of a new language (even if it is annoying sometimes).
No, it’s not, and that’s worse, not better. Understanding the pitfalls and quirks of the language, the gotchas and dicey areas where things can go wrong - those are the hard parts, and those are only learned through experience. This makes it even worse, because only Rust experts can do proper code reviews.
TBF, every language is like this. C’s probably worse in the foot-gun areas. But the more complex the language, the harder it is for people to get over that barrier of entry, and the fewer that will try. This is a problem of exclusion, and a form of gatekeeping that’s designed - unintentionally - into the language.
All the different tests I’ve seen comparing Rust and C put compile times in the same ballpark. Even if somehow every test is unrepresentative of real-world compile times, I doubt it is “order[s] of magnitude” worse.
I remember watching someone test the performance of hosting an HTTP web page, comparing Zig, Rust with a C HTTP library, and native Rust. Native Rust easily beat them out and was able to handle tens of thousands more client connections. While I know this isn’t directly relevant to kernels, the most popular C HTTP library is most likely quite optimized.
Memory-related vulnerabilities are consistently among the top reported vulnerabilities. It is a big deal, and no, you can’t just program around it. Everyone makes mistakes, has a bad day, or has something on their mind: moments of human fallibility. Eliminating an entire class of vulnerabilities while staying competitive with C is a hard task, but entirely worth doing.
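To make “eliminating an entire class” concrete, here’s a minimal sketch (my own example, not from this thread): the commented-out line is the stale-reference pattern that C compiles happily and only fails at runtime, whereas Rust rejects it before the program ever runs.

```rust
fn main() {
    let data = vec![1u8, 2, 3];
    let moved = data; // ownership moves here; `data` can no longer be used
    // println!("{:?}", data); // compile error: borrow of moved value
    // In C, the analogous stale pointer compiles fine and corrupts memory at runtime.
    println!("{:?}", moved);
}
```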
Just FYI, because I think this was in a different thread:
https://midwest.social/comment/15427570
I was curious, so I ran a not-very-scientific test based on packages built from source on my desktop. The short version of a very long post (in the repos) is that the median build time for Rust packages is, indeed, pretty close to an order of magnitude greater than that of the C packages.
One caveat is that Rust (at least through cargo) fetches dependencies at build time, and that download/compile is included in the timing, whereas C dependencies are already installed by other packages or are vendored in. Go behaves the same as Rust, and yet its median build times are even shorter than C’s, so it can’t all be blamed on dependency downloads; but it also can’t be ignored.
For my purposes, it makes no difference, because upgrading software on my computer always includes this build-time penalty for Rust programs; and this is why Rust programs, while being a fraction (50/800) of all from-source packages installed on my system, consume a disproportionately large amount of the time it takes for me to do updates from AUR. And that’s why I’ve started ignoring packages that depend on cargo or rustc.
I used Gentoo in ancient times, when kernel updates took a whole day. A modern computer can rebuild in an hour; a good one, even faster. I’m not a kernel developer, but I don’t think they need to rebuild the whole kernel for every iteration.
And as for Rust, I’m doing bioinformatics in Rust because our iteration time is orders of magnitude longer than a kernel build, and Rust reduced the number of iterations required to reach the final version.
Rust run times are excellent. And statically linked binaries are the superior intellect.
Runtime performance counts for me in only some specific cases, and there are many programs I have installed that I recompile because of updates far more frequently than I run them; and when I do run them, performance is rarely an issue.
But you have a good point: performance in the kernel is important, and it is run frequently, so the kernel is a good use case for Rust - where Go, perhaps, isn’t. My original comment, though, was that Zig appears to have many of the safety benefits of Rust, but vastly better compile times.
I really do need to write some Zig projects, because I sound like an advocate when really my opinions are uninformed. I have written Rust, though, and obviously have opinions about it, and especially how it is affecting my system update times.
I’ll keep ripgrep, regardless of compile times. Probably fd, too.
It is easier to safely optimize Rust than C, but that was not the point. The point was on correctness of code.
It is not unheard of for code to run for weeks and months. I need the code to be as bug-free as possible. For example, when converting one of our tools to Rust, we found a bug that would lead to wrong results on big samples. It was found by the Rust compiler! Our tests didn’t cover the bug because it only happens on very big samples. We can’t create a test file of hundreds of GB by hand and calculate the expected result. Our real data would have triggered the bug. So without moving to Rust, we would have gotten wrong results.
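The comment doesn’t say what the bug actually was, but purely as an illustration, a common shape for “wrong only on big inputs” is silent integer truncation, which C permits and Rust forces you to write out explicitly:

```rust
fn main() {
    let records: u64 = 5_000_000_000; // more than fits in 32 bits
    // In C, `uint32_t n = records;` silently truncates to 705032704.
    // Rust has no implicit narrowing; the conversion must be explicit:
    match u32::try_from(records) {
        Ok(n) => println!("fits in u32: {n}"),
        Err(_) => println!("too big for u32"),
    }
}
```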
So, a couple of thoughts. You can absolutely write safe code that produces wrong results. Rust doesn’t help - at all - with correctness. Even Rustaceans will agree on that point.
I agree that Rust is safer than C; my point is that if correctness and safety are the deciding criteria, then why not use Haskell? Or Ada? Both are more “safe” even than Rust, and if you’re concerned about correctness, Haskell is a “provable” language, and there are even tools for performing correctness analysis on Haskell code.
But those languages are not allowed in the kernel, and - indeed - they’re not particularly popular; certainly not in comparison to C, Go, or Rust. There are other factors than just safety and correctness; otherwise, something like OCaml would probably be a dominant language right now.
We didn’t get similar run times with Haskell.
Rust lets us abstract even file types (path to a fastq file, fasta file, annotations, etc.) with no run-time costs. This eliminates many bugs at compile time.
You may say that we can get it in C too, and you would be correct. But in C we’d spend our time herding pointers. Research is given X money for N months (sort of), so we have time constraints on development time.
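The file-type abstraction described above sounds like Rust’s newtype pattern; here’s a hedged sketch (the type and function names are mine, not theirs): distinct path types cost nothing at runtime but can’t be mixed up at compile time.

```rust
use std::path::PathBuf;

// Hypothetical newtypes: each is just a PathBuf at runtime, zero overhead.
struct FastqPath(PathBuf);
struct FastaPath(PathBuf);

// The signature documents and enforces which file goes where.
fn align(reads: &FastqPath, reference: &FastaPath) -> String {
    format!("aligning {} to {}", reads.0.display(), reference.0.display())
}

fn main() {
    let reads = FastqPath(PathBuf::from("sample.fastq"));
    let genome = FastaPath(PathBuf::from("genome.fasta"));
    println!("{}", align(&reads, &genome));
    // align(&genome, &reads) would be a compile error: mismatched types.
}
```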
If we do bitwise work, the compiler checks our base types.
Not to mention multithreading just works. Even big projects like BLAST had bugs that led to wrong results due to C/C++’s horrible multithreading. We encountered two more tools that had similar bugs.
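“Just works” here means the compiler enforces the locking discipline. A minimal sketch (my own, under the assumption that the wrong-results bugs were data races): sharing a mutable counter across threads without the Mutex simply does not compile.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared counter; removing the Mutex makes this a compile error,
    // ruling out the data race before the program runs.
    let count = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let count = Arc::clone(&count);
            thread::spawn(move || {
                for _ in 0..1000 {
                    *count.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("{}", count.lock().unwrap()); // always 4000, never a lost update
}
```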
I think that if someone ever does a meta-study of research code written in C, it may get papers retracted.
Have you compared the compile times for equivalent kinds of drivers in the Linux kernel?
No, but just for you I spent time today extracting a list of ~250 packages installed from source on my computer, and tomorrow, I’m going to clean re-install all of them, timed, and post the results.
There’s a mix of languages in there, and many packages have multiple language dependencies, but I’m going by the “Make Deps” package requirements and will post them.
There will probably be too many variables for a clean comparison, but I know I have things like multiple CSV and JSON CLI toolkits in different languages installed, so some extrapolations should be possible.
C is hard, because a lot of packages that must depend on gcc don’t include it in the make dependencies; they must assume everyone has at least one C compiler installed. A couple of packages explicitly depend on clang, so I’ll have that at least.
Make a YouTube video on it and I’ll watch it. I’m not a coder, though, but benchmarking and debunking is interesting. Whichever way it goes, and however clear or complex the results come out, it’ll be interesting.
Honestly, sounds great! Looking forward to the results. I do think Linux compile times matter, personally. The time saved during development because the compiler is doing checks isn’t a perfect one-to-one for this project, because people like myself compile the kernel far more than we develop for it, adding and removing stuff to trim it down for various platforms.
During your compiling, it would be interesting if you could find some Rust flags that disable checks to speed things up. Maybe there’s a config that skips the things downstream users can assume the actual devs ran?
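For what it’s worth, Rust’s safety checks (borrow checking, type checking) happen as part of compilation itself and can’t be skipped; most of the time goes to optimization and code generation. The knobs that do exist live in Cargo profiles. A sketch of settings that trade runtime speed for compile speed:

```toml
# Cargo.toml profile favoring compile speed over runtime speed.
[profile.release]
opt-level = 1       # lighter optimization passes
lto = false         # skip link-time optimization
codegen-units = 256 # maximize parallel code generation
```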
CC @[email protected]
K, I spent way more time on this than I wanted to, but it’s here. There’s a lot to read, but here’s the graph:
The readme explains everything I did to try to make it reasonably fair.
Note that the graph X-axis is logarithmic; the median compile time for Rust packages is an order of magnitude more than C or Go.
The sample size is pretty close to 50 packages in each language, and I made my best attempt to ensure each included package used only one compiled language. Without a lot more work, there wasn’t much more I could do to get an apples-to-apples comparison.
One thing to note is that I downloaded all package sources outside of the timing step. Rust, especially with cargo packages, downloads many dependencies in the build() phase, whereas with C they’re mostly already downloaded. So a significant amount of Rust build time is actually downloading and compiling dependencies, which it has to do for each virgin build. Whether that makes this an unfair comparison is debatable; I will point out that Go, however, does exactly the same thing: library dependencies are downloaded and compiled at build time, same as Rust. This makes the Go median even more impressive, but has no bearing on the Rust v. C discussion.

A final note: entirely unintentionally, I apparently have no from-source Zig programs installed (via AUR). I don’t know what to make of that. Is it really that far behind in popularity?
Anyway, all of the source and laborious explanation is there; if you’re running Arch, you can perform the same analysis, and most of the work is already done for you. You just need 4 pieces of non-standard software, two of which are probably already installed on your machine. Be aware, however: on my desktop it took 12 hours to re-download and clean-build the 276 qualifying AUR packages on my system, so it’s a long metric to run.
Wow, I read through the blog post, and though I’m not a developer, I’ve compiled and built Linux packages and operating systems in the past, so now I want to fly home and give your script a go myself.
I enjoyed your write up. I can’t comment on programming, but I enjoy a good journey and story.
My final takeaway is your image. I’ll keep it in mind. Interesting!
You read through all that? Wow. Good on you! Even I didn’t re-read it, so there are probably typos all over.
Yeah, the code isn’t interesting. It’s just a bunch of zsh hacked together; I wouldn’t be surprised if you encounter issues running it. The only thing I’m pretty sure of is that it won’t break anything.
Good luck. If you do run it and get a graph, please post it. I’m interested to see results from other systems. Note that the script generates an svg, so you’ll need to convert it to png to post it, or just go on and edit the csvtk graph command and change the svg suffix to png and it’ll create a png for you.
Also, I meant to do this in the README: a huge shout-out to the author of csvtk. It’s a fantastic tool, and I only just discovered the graph command, which does so much. It has a built-in, simple pivot table function (a group argument) that replaced a whole other tool and process step. Seriously nice piece of software for working with CSV.

I’m not not doing this; I wanted to spend my free time playing Factorio the next day, and I just haven’t gotten back to scripting and running this.
I’m going to do it and post the results; it’s just taking me a little longer to get to it than I expected.