  • And before you start whining - again - about how you are fixing bugs, let me remind you about the build failures you had on big-endian machines because your patches had gotten ZERO testing outside your tree.

    As far as I know, the Linux Foundation does not provide testing infrastructure to its developers. Instead, corporations are expected to use their massive resources to test patches across a variety of cases before contributing them.

    Yes, I think Kent is in the wrong here. Yes, I think Kent should find a sponsor or something to help him with testing and making his development more stable (stable in the sense of fewer changes over time, rather than stable as in reliable).

    But I kinda dislike how the Linux Foundation has a sort of… corporate-centric development model. It results in friction with individual developers, as shown here.

    Of all the people Linus has chewed out over the years, I always wonder how many of them were independent developers with few resources trying to figure things out on their own. I’ve always considered trying to learn to contribute, but the Linux kernel is massive. Combined with the programming pieces I would have to learn, as well as the infrastructure and ecosystem (mailing list, patch system, etc.), it feels like it would be really infeasible to get into without some kind of mentor or dedicated teacher.


  • So I don’t know how much you know about the shell, but the way the Linux command line works is that there is a set of variables, called environment variables, which dictate some behavior of the shell. For example, the $PATH variable tells the shell which directories to search when you try to execute a program.

    The documentation you linked wants you to create a custom shell variable, called SCALE_PATH, consisting of the path to the folder that contains the compiled scale binaries/programs you want to run.

    This command: export PATH="${SCALE_PATH}/bin:$PATH"

    temporarily edits your PATH variable, adding the folder with the scale programs to it, so that you can execute them from your shell.
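
    For example, a minimal sketch of the whole sequence (the install location /opt/scale and the binary name are made-up placeholders for wherever scale actually lives on your system):

    # point SCALE_PATH at the folder containing the scale install (hypothetical path)
    export SCALE_PATH="/opt/scale"

    # prepend its bin/ directory to PATH for this shell session only
    export PATH="${SCALE_PATH}/bin:$PATH"

    # programs in that folder can now be run by name
    which some-scale-tool    # hypothetical binary name

    This only lasts for the current shell session; to make it permanent, you can add those two export lines to your ~/.bashrc (or your shell’s equivalent).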


  • Thorium’s entire focus is on performance. As another commenter has noted, that means no security updates, and no privacy features.

    I wouldn’t recommend it for daily use, but if you are playing a browser-based game it’s worth testing out. I used to play krunker.io and I tested it to see if I could get more FPS (FPS equaled faster movement speed back then), but I didn’t see any major performance improvements over the major krunker clients or Microsoft Edge (the other most performant browser).





  • I cannot find anything related to that in their documentation, their about page, or their whitepaper.

    They talk a lot about decentralized computing, but any form of secure enclave or code verification isn’t mentioned.

    Compare that to this project, which is similar, but incomplete. However, quilibrium uses its own language, instead of python or javascript like golem does. The docs for golem do not explain how I am supposed to verify that a remote server is actually running my python/javascript code.




  • There is concern amongst critics that it will not always be possible to examine the hardware components on which Trusted Computing relies, the Trusted Platform Module, which is the ultimate hardware system where the core ‘root’ of trust in the platform has to reside.[10] If not implemented correctly, it presents a security risk to overall platform integrity and protected data

    https://en.m.wikipedia.org/wiki/Trusted_Computing

    Literally all TPMs are proprietary. It’s basically a permanent, unauditable backdoor that has had numerous issues, like this one (software), or this one (hardware).

    We should move away from them, and from other proprietary backdoors that deny users control over their own system, rather than towards them, and instead design apps that don’t need to trust the server, like end-to-end encryption.

    Also: if software is AGPL, then they are legally required to give you the source code behind the server software. Of course, they could just lie, but the problem of ensuring that a server runs certain software also has a legal solution.



    CrowdStrike didn’t target anyone either. Yet a mistake in code running at that level of privilege resulted in massive outages. Intel ME runs at even higher privileges, in even more devices.

    I am opposed to stuff like kernel-level code exactly for that reason. Mistakes can be just as harmful as malice, but both are parts of human nature. The software we design should protect us from ourselves, not expose us to more risk.

    There is no such thing as a backdoor that “good guys” can access but the bad guys cannot. Intel ME is exactly that, a permanent backdoor into basically every system. A hack of ME would take down basically all cyber infrastructure.



  • Why are you talking about Creative Commons?

    Because (from the article):

    Originally open-source under the General Public License, DuckStation‘s license was changed first to PolyFormStrict License and then to CC-BY-NC-ND. These changes prohibit commercial use and derivatives of the emulator, including packaging it for distribution.

    Yeah. It’s not supposed to be for code. Didn’t stop the Duckstation developer.

    There are plenty of options in licenses in the post-open source, copyfair, copyfarleft, & such that work for software that are not considered “free” or “open” (where open is more corporate than free, which free is obviously the better one) but still allow users to read & usually modify the source.

    I would have to evaluate those licenses on a case-by-case basis, but I suspect I would find the vast majority of them okay enough. But again, this is moving the goalposts. I was expressing my concerns with CC BY-NC-ND, but you have changed the discussion to be about other licenses. Although interesting, they are not relevant, since the DuckStation license is not one of those.

    I still think government funding for free software is the correct solution, however. I generally find that all of the post-open and similar licenses have restrictions that can be problematic, or loopholes that can be abused to get out of the “good” restrictions. With one of the licenses that demands that corporations making over some amount give up a percentage of their profits, I noted a while ago that Google used to run a scheme where Alphabet (Google’s parent company) was the actual owner of the Google logo and rented it to Google at an absurdly high price, in order to artificially lower Google’s profits. I think it would be too simple for the extremely wealthy companies to do something similar and use post-open licensed software without consequence.

    Taxing corporations is hard, but having every individual entity behind a piece of software try to extract resources from a corporation will be harder. “Divide and conquer”. My understanding is that license violations are a civil matter, meaning you have to spend money on lawyers and other legal costs and… you would be going against some of the richest entities in the world in a court where money is basically a win button.

    And of course, allowing society to continue to rely on proper Free Software licenses, ensures software freedom is preserved.

    usually modify the source.

    No. If I cannot modify the source, then I don’t really see a difference between it and proprietary software. Both the OSI and the Free Software Foundation at least require the ability to modify the source code in order for a license to actually count as FOSS under their guidelines, and I agree with them. Code I cannot modify is a piece of my computer I do not own.


  • Some of these license are very clear about what is commericial

    The license chosen in this article is a Creative Commons license, which is not a code license, but one intended for art. On their own page, they acknowledge the difficulty of categorizing commercial vs non-commercial use cases:

    In CC’s experience, it is usually relatively easy to determine whether a use is permitted, and known conflicts are relatively few considering the popularity of the NC licenses. However, there will always be uses that are challenging to categorize as commercial or noncommercial. CC cannot advise you on what is and is not commercial use. If you are unsure, you should either contact the rights holder for clarification, or search for works that permit commercial uses.

    What’s wild is the banshees here rarely acknowledge how AGPL works similar to these now adding restrictions instead of laying out what you can do, but daddy OSI approved it so it must be good.

    1. “You must share source code of this service with your users” is not really a restriction on who can use the software or what they can use it for.

    2. Fuck the OSI. They’ve done more harm to free software than any other organization. In the recent controversy with Redis and the SSPL, they refused to acknowledge the actual problem with the SSPL license: that it is unusable, because it requires all “software used to deploy this software” to be open source. Does that mean that people who deploy software on Windows have to cough up the source code for Windows? What about Intel Management Engine, the proprietary bit of code in every single Intel CPU? Redis moved to a dual license: the SSPL and a proprietary license. An unusable license… and a proprietary license = proprietary software. But instead, the OSI whined that the problem with the SSPL was that it would “restrict usage” because people have to share more source code. The OSI, and open source, have always been corporate entities that usurp free software. Just look at their sponsors page and see who supports them: Amazon, Google, Intel, Microsoft…

    The goal is often to help workers & the commons—say you as an individual are free to use it for, or others for places where folks have equal pay or say, or less than 10 seats. To say that since a software license says Amazon can’t use this but you can means it’s all proprietary means you are either Amazon or a goober to think these are equivalent. Something something baby out with the water fallacy

    You are moving the goalposts. I argued against a license that restricts derivatives and commercial use. You are now defending licenses that target specific entities and seek to remain open to workers and the commons. A license that restricts derivatives is not this.

    To be blunt, I would be okay with a license that specifically restricts the retroarch devs from making derivatives, and I would find it funny af. I think that was what the DuckStation dev was going for with the noncommercial and no-derivatives clauses (since retroarch maintains forks of software in order to add them as cores), but I’m frustrated that the result is essentially a shift to a proprietary license instead.

    Although such a hypothetical license that targets the retroarch developers would not be approved by the OSI or the Free Software institutions, I don’t really care. Racists don’t get rights.


  • No, these licenses are problematic. Fundamentally, software under them is proprietary, and restricts me from full ownership and control over my computer.

    No derivatives prevents me from modifying the program and maintaining the control I am owed over my device. Every bit of proprietary code is a percentage of my computer that is no longer truly mine.

    No commercial usage is a continuum fallacy. Is my blog commercial, because I advertise my resume on it? Is retroarch* commercial, because they have a patreon and get paid? Are “nonprofits” not commercial, since they claim to not want to make a profit? Or are only registered businesses commercial?

    The correct solution to maintain software freedom is for governments to extract money from the entities that profit the most off of free software, and use those taxes to fund free software. Germany is kind of doing this with their Sovereign Tech Fund.

    *Fuck the retroarch devs btw. Did a little digging, they seem to have been very problematic, and ran multiple harassment campaigns.



  • Because forgejo’s SSH isn’t a normal SSH service; it’s there so that users can access git over SSH.

    Now technically, a bastion should work, but it’s not really what people want when they are trying to set up git over SSH. Since git/SSH is a service, rather than an administrative tool, why shouldn’t it be configured with the other tools used for exposing services (reverse proxy/caddy)?

    And in addition to that, people most probably want git/ssh to be available publicly, which a bastion host doesn’t do.
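
    For illustration, the client side then looks roughly like this (a sketch only; git.example.com, port 2222, and the repo path are made-up placeholders for whatever your forgejo instance actually uses):

    # clone over SSH on a non-standard port (hypothetical host/port/repo)
    git clone ssh://git@git.example.com:2222/myuser/myrepo.git

    # or record the details in ~/.ssh/config so the short scp-style URL works too:
    #   Host git.example.com
    #       User git
    #       Port 2222
    git clone git@git.example.com:myuser/myrepo.git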


  • So, I’m not gonna pretend flatpak doesn’t use more space than normal apps, but due to deduplication (and sometimes filesystem compression), flatpaks often use less space than people think.

    [nix-shell:~/Playables/chronosphere]$ sudo /nix/store/xdrhfj0c64pzn7gf33axlyjnizyq727v-compsize-1.5/bin/compsize -x /var/lib/flatpak/
    Processed 49225 files, 21778 regular extents (46533 refs), 22188 inline.
    Type       Perc     Disk Usage   Uncompressed Referenced
    TOTAL       53%      898M         1.6G         3.6G
    none       100%      499M         499M         1.0G
    zstd        34%      399M         1.1G         2.6G
    
    [nix-shell:~/Playables/chronosphere]$ du -sh /var/lib/flatpak/
    1.7G    /var/lib/flatpak/
    

    I only have one flatpak app installed, and du says it takes up 1.7 GB of space… but when using a tool that takes BTRFS transparent compression into account, only half of that space is actually used on my disk.

    I recommend compsize as a BTRFS-compression-aware version of du, and flatpak-dedup-checker as a deduplication-aware checker of the space flatpak uses.

    I think flatpak absolutely does use up more space, because yes, it is another linux distro in your distro. But I think that’s a tradeoff people accept in order to have a universal package manager for graphical apps.

    Also, you can flatpak CLI tools. They are just difficult to run at first, because you have to do the flatpak run org.orgname.appname thing, but you can alias that to a short command. Here is a flatpak of micro, a terminal-based text editor.
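
    For example, roughly (the app ID below is my guess at micro’s Flathub ID; check what flatpak list shows on your system):

    # list installed flatpak applications and their IDs
    flatpak list --app

    # run it the long way (io.github.zyedidia.micro is assumed to be micro's app ID)
    flatpak run io.github.zyedidia.micro notes.txt

    # alias it to a short command; add this line to ~/.bashrc to keep it
    alias micro='flatpak run io.github.zyedidia.micro'
    micro notes.txt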

    (I prefer nix for cli tools though, and docker/podman/containers for services).


  • So based on what you’ve said in the comments, I am guessing you are managing all your users with NixOS, in the NixOS config, and want to share these users with other services?

    Yeah, I don’t even know if sharing Unix users is possible. EDIT: It seems to be, based on comments below.

    But what I do know is possible is for Unix/Linux to get its users from LDAP. Even sudo is able to read from LDAP, and use LDAP groups to authorize users as being able to sudo.
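
    As a quick sanity check (assuming NSS has already been pointed at LDAP, e.g. via nslcd or SSSD; the user and group names below are hypothetical), LDAP accounts show up through the normal Unix user database:

    # users and groups are resolved through NSS, which includes LDAP once configured
    getent passwd some-ldap-user
    getent group wheel

    # sudo then treats LDAP group membership like any local group
    sudo -l -U some-ldap-user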

    Setting these up on NixOS is trivial. You can use the users.ldap set of options on NixOS to configure authentication against an external LDAP server. Then, you can configure sudo.

    After all of that, you could declaratively configure an LDAP server using NixOS, including setting up users. For example, it looks like you can configure users and groups for the kanidm LDAP server.

    Or you could have a config file for the openldap server.

    RE: Manage auth at the reverse proxy: If you use Authentik as your LDAP server, it can reverse proxy services and auth users at that step. A common setup I’ve seen is to run another reverse proxy in front of Authentik, point that reverse proxy at Authentik, and then use Authentik to reverse proxy just the services you want behind a login page.