I support free and open source software (FOSS) like VLC, qBittorrent, LibreOffice, GIMP…
But why do people say that it’s as secure or more secure than closed source software?
From what I understand, closed source vendors don’t disclose their code.
If you want to see the source code of Photoshop, you actually need to work for Adobe. Otherwise, you need to be some kind of freaking reverse-engineering expert.
But open source projects have their code available to the entire world on websites like GitHub or GitLab.
Isn’t that actually also helping hackers?
The idea you’re getting at is ‘security by obscurity’, which in general is not well regarded. Having secret code does not imply you have secure code.
But I think you’re right on a broader level: people get too comfortable assuming that because something is open source, it’s safe.
In theory you can go look at the code for the FOSS you use. In practice, most of us assume someone else has, and we just click download or tell the package manager to install. The old adage is “given enough eyeballs, all bugs are shallow.” And I think that probably holds, but the problem is that many of the eyes aren’t looking at anything. Having the right to view the source code doesn’t mean enough people are, or even meaningfully can. (And I’m as guilty of being lax and incapable as anyone, not looking down my nose here.)
In practice, when security flaws are found in OSS, word travels pretty fast. But I’m sure more are out there than we realize.
It’s also easier to share vulnerability fixes between different projects.
Say “Y” uses a memory management scheme similar to “T”’s. T gets hacked due to whatever, and people who use both Y and T report to Y that a similar vulnerability might be exploitable.
Edit:
In closed source, this might happen if both projects are under the same company.
But users will never have the ability to tell Y that T was hacked in a way that might affect Y.
Zero-day exploits, i.e. vulnerabilities that aren’t yet publicly known, offer hackers the ability to essentially rob people blind.
Open source code means you have the entire globe of developers collaborating to detect and repair those vulnerabilities. So while it’s not inherently more secure, it is in practice.
Exploiting four zero-day flaws in the systems,[8] Stuxnet functions by targeting machines using the Microsoft Windows operating system and networks, then seeking out Siemens Step7 software. Stuxnet reportedly compromised Iranian PLCs, collecting information on industrial systems and causing the fast-spinning centrifuges to tear themselves apart.[3] Stuxnet’s design and architecture are not domain-specific and it could be tailored as a platform for attacking modern SCADA and PLC systems (e.g., in factory assembly lines or power plants), most of which are in Europe, Japan and the United States.[9] Stuxnet reportedly destroyed almost one-fifth of Iran’s nuclear centrifuges.[10] Targeting industrial control systems, the worm infected over 200,000 computers and caused 1,000 machines to physically degrade.
Stuxnet has three modules: a worm that executes all routines related to the main payload of the attack, a link file that automatically executes the propagated copies of the worm and a rootkit component responsible for hiding all malicious files and processes to prevent detection of Stuxnet.
“Open source code means you have the entire globe of developers collaborating to detect and repair those vulnerabilities.”
Heartbleed has entered the chat
The whole Stuxnet story is fascinating. A virus designed to spread to the whole Internet, and then activate inside a specific Iranian facility. Convinced me that we already live in a cyberpunk world.
Because more eyes spot more bugs, supposedly. I believe it; running closed source software is truly insane.
It’s not “assumed to be secure.” The source code being publicly available means you (or anyone else) can audit that code for vulnerabilities. The publicly available issue tracking and change tracking means you can look through bug reports and see if anyone else has found vulnerabilities and you can, through the change history and the bug report history, see how the devs responded to issues in the past, how they fixed it, and whether or not they take security seriously.
Open source software is not assumed to be more secure, but its security (or lack thereof) is much easier to verify: you don’t have to take the dev’s word on whether it’s secure, and (especially for the more popular projects like the ones you listed) you have thousands of people with different backgrounds and varying specialties within programming, with no affiliation with the project and no reason to trust it, doing independent audits of the code.
Ape alone… weak. Apes together… strong.
Now I’ve got an image in my head of apes sitting around in the jungle using laptops
Fixing backdoor exploits across multiple code repositories 😂
GibbonHub
Somewhat of a different take from what I’ve seen from the other comments. In my opinion, the main reason is this:

Companies have basically two reasons to do safety/security: Brand image and legal regulations.
And they have a reason not to do safety/security: cost pressure.
Now imagine a field where there’s hardly any regulation and you don’t really stand out when you do security badly. Then the cost pressure means you just won’t do much security.
That’s the software engineering field.
Now compare that to open-source. I’d argue a solid chunk of its good reputation is from hobby projects, where people have no cost pressure and can therefore take all the time to do security justice.
In particular, you need to remember that most security vulnerabilities are just regular bugs that happen to be exploitable. I have significantly fewer bugs in my hobby projects than in the commercial projects I work on, because there’s no pressure to meet deadlines.
And frankly, the brand image applies even to open source. I will write shitty code if you pay me to. But if my name is published along with it, you need to pay me significantly more. So even if it is a commercial project that happens to be published under an open-source license, I will not accept as many compromises to meet deadlines.
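To make “regular bugs that happen to be exploitable” concrete, here’s a minimal hypothetical sketch in C (the functions and names are invented for illustration, not taken from any real project). In a bug tracker this reads as “long inputs crash the program”; to an attacker it’s a textbook stack buffer overflow:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: an ordinary-looking bug that is also exploitable. */
void greet(const char *name) {
    char buf[32];
    /* BUG: strcpy() never checks the length of `name`. Anything longer
       than 31 bytes overflows `buf` on the stack -- a "regular bug"
       that doubles as a classic exploitable buffer overflow. */
    strcpy(buf, name);
    printf("Hello, %s!\n", buf);
}

/* The boring fix: bound the copy explicitly. */
void greet_safely(const char *name) {
    char buf[32];
    snprintf(buf, sizeof buf, "%s", name);  /* truncates instead of overflowing */
    printf("Hello, %s!\n", buf);
}

int main(void) {
    greet_safely("world");
    return 0;
}
```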
One thing to keep in mind is that NO CODE is believed to be secure… regardless of open source or closed source. The difference is that a lot of folks can audit open source, whereas with closed source we all have to take the word of private companies who are constantly reducing headcount and replacing devs with AI.
It’s relatively easy. First of all, if someone implements a backdoor, it’s much easier to find out, since you can look at the code directly. Second, a lot of people actually do this: they look at the code of projects and search for security holes in it.
So even if it isn’t that much more secure than closed source, it’s much easier to trust, simply because people can search for vulnerabilities much more easily.
A great example of how open source makes it easier to spot backdoors is the xz security breach.
Assumed by who?
You live in some Detroit-like hellscape where everyone everywhere 24/7 wants to kill and eat you and your family. You go shopping for a deadbolt for your front door, and encounter two locksmiths:
Locksmith #1 says “I have invented my own kind of lock. I haven’t told anyone how it works, the lock picking community doesn’t know shit about this lock. It is a carefully guarded secret, only I am allowed to know the secret recipe of how this lock works.”
Locksmith #2 says “Okay, so the best lock we’ve got was designed in the 1980s. The design is well known, the blueprints are publicly available, the locksport and various bad-guy communities have had these locks for decades, and the few attacks that they made work were fixed by the manufacturer so they don’t work anymore. Nobody has demonstrated a successful attack on the current revision of this lock in the last 16 years.”
Which lock are you going to buy?
Or just, you know, move out of Detroit… ¯\_(ツ)_/¯
To keep that metaphor going, if you are online, you are in Detroit.

You’ve reminded me of global chat in every F2P game I’ve played
I hear the real estate in Flint is affordable.
Really? I hear it’s a steel.
It helps hackers, sure, but it also helps the community in general vet the overall quality of the software and warn others not to use it. When it’s closed source, you have no choice but to trust the company behind it.
There are several FOSS apps I’ve encountered, looked at the code of, and passed on because the code was horrible. Someone will inevitably write a blog post about how bad the code is, warning people not to use the project.
That said, the code being public for everyone to see also inherently puts a bit of pressure on you to write good code, because the community will roast you if it’s bad. And FOSS projects are usually either backed by a company or by individuals with a passion: for the former, there’s the incentive of keeping a good image, because no company wants to expose itself cutting corners publicly; the latter is, well, passion driven, so it’s usually written reasonably well too.
But the key point really is, as a user you have the option to look at it and make your own judgement, and take measures to protect yourself if you must run it.
Most closed source projects are vulnerable because of pressure to deliver fast, and nobody will know until it gets exploited. This leads to really bad code that piles up over time. Try to sneak some bullshit into the Linux kernel and there will be dozens of news articles and YouTube videos about Linus’s latest rant at the guilty party. That doesn’t happen in private projects; you get an LGTM because the sprint is ending and sales already sold the feature to a customer for next week.
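Someone actually did try exactly that: in 2003, a two-line “bugdoor” was slipped into sys_wait4() via the Linux kernel’s CVS mirror and caught almost immediately. Here’s a self-contained paraphrase (the struct, flag values, and function name below are stand-ins, not the real kernel code); the whole trick is a single = hiding where a == should be:

```c
#include <stdio.h>

/* Stand-ins for the kernel context (types and values are illustrative only). */
#define __WCLONE 0x80000000u
#define __WALL   0x40000000u
struct task { unsigned int uid; };
static struct task current_task = { .uid = 1000 };  /* ordinary user */
#define current (&current_task)

static int wait_options_check(unsigned int options) {
    int retval = 0;
    /* Paraphrase of the 2003 attempted backdoor: `current->uid = 0` is an
       ASSIGNMENT, not a comparison. It silently makes the caller root,
       then evaluates to 0 (false), so the "error" branch never runs and
       the line passes a casual glance as ordinary option validation. */
    if ((options == (__WCLONE | __WALL)) && (current->uid = 0))
        retval = -1;  /* looks like rejecting an invalid flag combo */
    return retval;
}

int main(void) {
    printf("uid before: %u\n", current->uid);  /* 1000 */
    wait_options_check(__WCLONE | __WALL);
    printf("uid after:  %u\n", current->uid);  /* 0 -- root */
    return 0;
}
```

Because every change to the real tree is public, the bogus commit stood out right away. In a private repo, a diff like this could easily ride an end-of-sprint LGTM.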
If I can see the code, I can see if said code is doing something fucky. If I can’t see the code, I have to just have faith that it’s not doing something fucky.
You theoretically can see the code. You don’t actually look at it. Nor do you have the knowledge to understand the security implications of all the software you use.
In practice it makes little difference for security if you use open or closed source software.
No, you literally can see the code; that’s why it’s open source. YOU may not look at it, but people do. Random people, complete strangers, unpaid and un-vested in the project. The alternative is a company that pays people to say “Yeah, it’s totally safe.” That conflict of interest is problematic. Also, depending on what it’s written in, yes, I do sometimes take the time. Perhaps not for every single thing I run, but any time I run across niche projects, I read first. To claim that someone can’t understand it is wild. That’s a stranger on the internet; your knowledge of their expertise is 0.
In practice, 1,000 random people on the internet with no reason to “trust you, bro” being able to audit every change you make to your code is far more trustworthy than a handful of people paid by the company they represent. What’s worse is that if Microsoft were to have a breach, then maybe 10 people on the planet would know about it. 10 people with jobs, mortgages, and families tied to that knowledge. They won’t say shit, because they can’t lose that paycheck. Compare that to, say, the xz backdoor, where the source is available and gets announced, so people know exactly who, what, and where to resolve the issue.
How did Heartbleed happen in OpenSSL, then? Or Apple’s “goto fail”?
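For context, here’s a condensed, self-contained sketch of the “goto fail” bug (CVE-2014-1266, from Apple’s Secure Transport; the check functions below are stubs standing in for the real hash and signature steps). A duplicated goto unconditionally skips the final signature check while err still says “success”:

```c
#include <stdio.h>

/* Stubs standing in for the real hash/signature steps (illustrative only). */
static int check_hash_part1(void) { return 0; }       /* 0 == OK */
static int check_hash_part2(void) { return 0; }
static int check_final_signature(void) { return 1; }  /* would reject */

static int verify_signature(void) {
    int err = 0;
    if ((err = check_hash_part1()) != 0)
        goto fail;
    if ((err = check_hash_part2()) != 0)
        goto fail;
        goto fail;  /* BUG: duplicated line. Always jumps to fail with
                       err == 0, skipping the final check below. */
    if ((err = check_final_signature()) != 0)  /* never reached */
        goto fail;
fail:
    return err;     /* 0 means "signature verified" -- even here */
}

int main(void) {
    printf("verify_signature() = %d (0 = accepted)\n", verify_signature());
    return 0;
}
```

Both bugs sat in publicly visible code for a long time before anyone noticed, which is why “many eyes” is a probability argument, not a guarantee.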
With open source code you get more eyes on it. Issues get fixed quicker.
With closed source, such as Photoshop, only Adobe can see the code. Maybe there are issues there that could be fixed. Most large companies have a financial interest in having “good enough” security.
It’s not “assumed” to be secure.
It’s out there and visible for all to see. Hopefully, someone knowledgeable has taken it upon themselves to take a look at the software and assess its security.
The largest projects, like all the ones you named are popular enough that there’s no shortage of people taking a peek.
Of course, that doesn’t mean actual security audits are uncalled for. They’re necessary. And they’re being done. And with the code out there, any credible auditor will audit all of it, since it’s available.
Compare that to closed-source.
With closed-source, the code isn’t out there. Anyone can poke around, sure, but that’s like poking a black box with a stick. You can infer some things, and there are occasional source code leaks, but it isn’t all visible. This is also much less efficient and requires much more work for a fraction of the results.
The same goes for actual audits. Usually not all of the source code is handed over to the auditors, so some vulnerabilities remain uninspected and dormant.
Sure, not having the code out there is “security”. If someone doesn’t see the code, it’s much harder to find the weakness. Harder, but not impossible.
There’s a lot of open-source software. There’s also a lot of closed-source software; much more than the open-source kind, in fact.
What open-sourcing does is increase the number of eyes looking at the code. And each of those eyes could find a weakness. It might be a bad actor, but it’s most likely a good one.
With open source, any changes are publicly visible, and any attempt to sneak a backdoor in has a much higher chance of being seen, again due to the large number of eyes which can see it.
Closed-source code also gives lazy programmers an easy way out of fixing (or not introducing) vulnerabilities: “no one will know”. With open source, again, there are a lot of eyes on the code, not just the one team writing it and maybe another auditing it, as is often the case with closed source.
That’s why open source software is safer in general. Precisely because it’s available, attacking it might seem easier. But for every bad actor looking at the code, there are at least ten people looking who aren’t. And if they spot a vulnerability, they’ll report it.
Security with open source is almost always proactive, while with closed source it’s hit-or-miss. Many vulnerabilities have to cause an issue before being fixed.
Helping hackers is the whole point. They can read the source code and report problems with the software.