Except “mass” is not useful by itself. It’s not a chair factory where more people equals faster delivery, just like 9 women won’t deliver a baby in a month. I wish companies understood this.
I think the answer to this is lack of adoption.
Ok, but the comment thread is about people preferring Bluesky to Mastodon, hence my confusion.
Isn’t the format literally just Twitter?
Are you complaining that older versions of Java don’t have the features of newer versions of Java…?
For me, as primarily a backend dev, the argument was that it’s a framework, unlike React, so you get an everything-in-one solution which is quite easy to set up and use.
Given that Google still hasn’t killed this one, it’s also a mature platform with plenty of articles online on how to use it.
IIRC the license was also better than React’s, at least last time I checked.
Not sure on what the landscape looks like today, but when I was making the choice, the internet didn’t seem to consider other solutions to be competitive with either React or Angular.
FYI there are fully playable unofficial ports of Jak 1 and 2, and they’re working on the 3rd one: https://opengoal.dev/
In my experience LLMs do absolutely terribly with writing unit tests.
IMO this perspective that we’re all just “reimplementing basic CRUD” applications is the reason why so many software projects fail.
How do abstractions help with that? Can you tell, from the symptoms, which “level of abstraction” contains the bug? Or do you need to read through all six (or however many) “levels”, across multiple modules and functions, to find the error?
I usually start from the lowest abstraction, where the stack trace points me, and don’t need to look at the rest, because my code is written well.
It’s only as incomprehensible as you make it.
If there are 6 subfunctions, that means there are 6 levels of abstraction (assuming the method extraction was not done blindly), which further suggests that maybe they should actually be part of a different class (or classes). Why would you be interested in 6 levels of abstraction at once?
But we’re arguing hypotheticals here. Of course you can make the method implementations a complete mess; the book cannot guarantee that the person applying the principles used their brain as well.
You’re nitpicking.
As it happens, it’s just an example to illustrate specifically the “extract to method” issues the author had.
Of course, in a real-world scenario we want to limit mutating state, so it’s likely this method would return a Commission list, which would then be used by a Use Case class which persists it.
I’m fairly sure the advice about limiting mutating state is also in the book, though.
At the same time, you’re likely going to have a void somewhere, because some use cases are only about mutating something (e.g. changing something in the database).
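A minimal sketch of what I mean, where the Use Case’s execute() is that void; all names here (Commission, CommissionRepository, CalculateCommissionsUseCase) are hypothetical illustrations, not from the book or the article:

import java.util.ArrayList;
import java.util.List;

// Hypothetical domain types, invented for this illustration.
record Commission(String description, double amount) {}

interface CommissionRepository {
    void saveAll(List<Commission> commissions);
}

class CommissionCalculator {
    // The calculation is kept pure: it returns a list instead of mutating shared state.
    public List<Commission> calculateCommissions() {
        List<Commission> commissions = new ArrayList<>(calculateDefaultCommissions());
        if (hasExtraCommissions()) {
            commissions.addAll(calculateExtraCommissions());
        }
        return commissions;
    }

    private List<Commission> calculateDefaultCommissions() {
        return List.of(new Commission("default", 100.0));
    }

    private boolean hasExtraCommissions() {
        return true;
    }

    private List<Commission> calculateExtraCommissions() {
        return List.of(new Commission("extra", 25.0));
    }
}

// The Use Case class is the one place that mutates anything: it persists the result.
class CalculateCommissionsUseCase {
    private final CommissionCalculator calculator;
    private final CommissionRepository repository;

    CalculateCommissionsUseCase(CommissionCalculator calculator, CommissionRepository repository) {
        this.calculator = calculator;
        this.repository = repository;
    }

    public void execute() {
        repository.saveAll(calculator.calculateCommissions());
    }
}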
It makes me sad to see people upvote this.
Robert Martin’s “Clean Code” is an incredibly useful book which helps write code that Fits In Your Head, and, so far, is the closest to making your code look like instructions for an AI instead of random incantations directed at an elder being.
The principle that the author of this article argues against seems to be the very principle that helps abstract away the logic which is not necessary to understand the method.
public void calculateCommissions() {
    calculateDefaultCommissions();
    if (hasExtraCommissions()) {
        calculateExtraCommissions();
    }
}
This tells me all I need to know about what the method does - it calculates default commissions, and, if there are extra commissions, it calculates those, too. It doesn’t matter if there are 30 private methods inside the class because I don’t read the whole class top to bottom.
Instead, I may be interested in how exactly the extra commissions are calculated, in which case I will go one level down, to the calculateExtraCommissions() method.
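For illustration, here is one hypothetical body for that next level down, in the same void style as the example above; the eligibility check, rate, and Sale type are invented, not from the book or the article:

private void calculateExtraCommissions() {
    // The details that calculateCommissions() deliberately abstracts away.
    // salesEligibleForBonus(), commissions, and EXTRA_RATE are hypothetical members of the class.
    for (Sale sale : salesEligibleForBonus()) {
        commissions.add(new Commission("extra", sale.amount() * EXTRA_RATE));
    }
}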
From a decade of experience I can say that applying clean code principles results in code which is easier to work with and more robust.
Edit:
To be clear, I am not condoning the use of global state that is present in some examples in the book, or even speaking of the objective quality of some of the examples. However, the author of the article is throwing out a very valuable baby with the bathwater, as the actual advice given in the book is great.
I suppose that is par for the course, though, as the aforementioned author seems to disagree with the usefulness of TDD, claiming it’s not always possible…
You seem to think that “open source” is just about the license and that a project is open source if you’re allowed to reverse engineer it.
You have a gross misunderstanding of what OSS is, which contradicts even the Wikipedia definition, and are unwilling to educate yourself about it.
You suggest that Mistral would need to lend us their GPUs to fit the widely accepted definition of OSS, which is untrue.
You’re either not a software engineer, or you have an agenda.
Because of this, I will not be continuing this conversation with you, as at this point it is just a waste of my time.
You’re misunderstanding the argument, hopefully not on purpose.
You can download a binary of Adobe Photoshop and run it. That doesn’t make it open source.
I cannot make Mistral Nemo from just the open-sourced tools, therefore Mistral Nemo is not open source.
But then it’s the tools to make the AI that are open source, not the model itself.
I think that we can’t have a useful discussion on this if we don’t distinguish between the source code of the training framework and the “source code” of the model itself, which is the training data set. E.g., Mistral Nemo can’t be considered open source, because there is no Mistral Nemo without the training data set.
It’s like with your Doom example - the Doom engine is open source, but Doom itself isn’t. Unfortunately, here the analogy falls apart a bit, because there is no logic in the art assets of Doom, whereas there is plenty of logic in the dataset for Mistral - enough that the devs said they don’t want to disclose it for fear of competition.
This data set logic - incredibly valuable and important for the behavior of the AI, as confirmed by the devs - is why the model is not open source, even though the training framework might be.
Edit:
Another aspect is the spirit of open source. One of the benefits of OSS is that you can study the source code to determine whether the software is in compliance with various regulations - you can audit that software.
How can we audit Mistral Nemo? How can we confirm that it doesn’t utilize copyrighted material to provide its answers?
We’ll have to agree to disagree on pretty much everything, then.
You’re trying to change the definition of open source for AI models, and your argument is that they’re magic, so different rules should apply.
No, they’re not fundamentally different from other software. Not by that much.
The training data is the source of knowledge for the AI model. The tools to train the model are the compiler for that AI model. What makes an AI model different from another is both the source of knowledge and the compiler of that knowledge.
AFAIK, only one of those things is open source for Mistral - the compiler of knowledge.
You can make an argument that tools to make Mistral models are open source. You cannot make an argument that the model Mistral Nemo is open source, as what makes it specifically that model is the compiler and the training data used, and one of those is unavailable.
Therefore, I can agree on the social network analogy if we’re talking about whether the tools to make Mistral models are open-source. I cannot agree if we’re talking about the models themselves, which is what everyone’s interested in when talking about AI.
That’s not creepy or weird, that’s horrifying.