• RoboticMask@lemmy.world

    Tbh, most websites I visit probably aren’t important enough to need archiving. I’d assume installing this extension would just contribute to a bloated archive with little additional value.

    • Merulox@lemmy.worldOP

      From my understanding, the Internet Archive’s goal (granted that they manage to get through their current dilemma) is to archive the whole Internet, so I wouldn’t consider this bloat.

  • grue@lemmy.world

    My desire to help archive.org preserve the Internet is conflicting with my desire not to have anybody tracking my browsing habits.

  • JohnEdwa@kbin.social

    I believe the automatic archiving has to be enabled manually: open the extension’s settings, select the general tab, and enable “Auto save page”.
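
    For anyone curious what that toggle amounts to, here’s a minimal sketch of how a WebExtension might persist such a setting with the standard storage API. The key name autoSavePage is my assumption, not the extension’s actual code.

    ```typescript
    // Minimal sketch: persisting an "Auto save page" toggle in a WebExtension.
    // The key name "autoSavePage" is an assumption, not the extension's real code.
    import browser from "webextension-polyfill";

    async function setAutoSave(enabled: boolean): Promise<void> {
      // storage.sync keeps the setting in the user's profile and syncs it
      // across their signed-in browsers.
      await browser.storage.sync.set({ autoSavePage: enabled });
    }

    async function isAutoSaveEnabled(): Promise<boolean> {
      // Passing an object supplies a default when the key has never been set.
      const stored = await browser.storage.sync.get({ autoSavePage: false });
      return Boolean(stored.autoSavePage);
    }
    ```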

  • iRyu@lemmy.world

    Is this just for personal purposes or would I be contributing to the internet archive?

    • Merulox@lemmy.worldOP

      Purely to contribute to the Internet Archive. There’s a “Save Page Now” button that saves the webpage to your account, but the automatic downloader I refer to in the post doesn’t do that; it exists purely to expand the archive.
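
      For a rough idea of what “saving” involves on the wire, the Wayback Machine has long exposed a public save endpoint that takes nothing but a URL. A minimal sketch, runnable in Node 18+ with its global fetch; the extension itself may go through its own internal API, so treat this as illustrative:

      ```typescript
      // Sketch: asking the Wayback Machine to capture a page via its
      // long-standing public save endpoint. Only the URL is sent; the
      // Internet Archive fetches the page itself.
      async function savePageNow(url: string): Promise<void> {
        const response = await fetch(`https://web.archive.org/save/${url}`);
        console.log(`Archive request for ${url}: HTTP ${response.status}`);
      }

      // Hypothetical example URL.
      savePageNow("https://example.com/some-article").catch(console.error);
      ```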

      • LoafyLemon@kbin.social

        How safe is it to have automatic page saving enabled on websites that require an account? Could it potentially dox you?

        • merulox@kbin.social

          It doesn’t literally scrape the web page off your browser; it just sends a request asking the Internet Archive to download the page at the URL you’re visiting (see the sketch below).

          (Replying from kbin because Lemmy just wouldn’t let me reply… I’d been trying for over 20 minutes.)
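
          To make that concrete, a background script doing URL-only auto-archiving might look roughly like this. This is a hypothetical reconstruction, not the extension’s actual source; the point is that only tab.url ever leaves the browser, never the rendered page content:

          ```typescript
          // Hypothetical sketch of URL-only auto-archiving in a WebExtension
          // background script; not the actual extension source.
          import browser from "webextension-polyfill";

          browser.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {
            // Fire once the page has finished loading, and only for http(s) URLs.
            if (changeInfo.status !== "complete" || !tab.url?.startsWith("http")) {
              return;
            }
            // Only the URL is transmitted. The Internet Archive's own crawler
            // fetches the page, so nothing from your logged-in session
            // (cookies, DOM, account data) is captured from the browser.
            fetch(`https://web.archive.org/save/${tab.url}`).catch(() => {
              // Best-effort: ignore network errors.
            });
          });
          ```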

          • LoafyLemon@kbin.social

            I wasn’t sure whether it was using the client to capture the snapshot; better safe than sorry! Thanks for explaining.