The malicious changes were submitted by JiaT75, one of the two main xz Utils developers with years of contributions to the project.

“Given the activity over several weeks, the committer is either directly involved or there was some quite severe compromise of their system,” an official with distributor OpenWall wrote in an advisory. “Unfortunately the latter looks like the less likely explanation, given they communicated on various lists about the ‘fixes’” provided in recent updates.

On Thursday, someone using the developer’s name took to a developer site for Ubuntu to ask that the backdoored version 5.6.1 be incorporated into production versions because it fixed bugs that caused a tool known as Valgrind to malfunction.

“This could break build scripts and test pipelines that expect specific output from Valgrind in order to pass,” the person warned, from an account that was created the same day.

One of the maintainers of Fedora said Friday that the same developer approached them in recent weeks to ask that Fedora 40, a beta release, incorporate one of the backdoored utility versions.

“We even worked with him to fix the valgrind issue (which it turns out now was caused by the backdoor he had added),” the Fedora maintainer said.

“He has been part of the xz project for two years, adding all sorts of binary test files, and with this level of sophistication, we would be suspicious of even older versions of xz until proven otherwise,” the maintainer added.

  • jj4211@lemmy.world · 3 months ago

    It’s maybe possible, but perhaps still unlikely.

    Overwhelmingly thorough security review is time consuming and expensive. It’s also not perfect, as evidenced by just how many security issues accidentally live long enough to land even in enterprise releases. That’s even without a bad actor trying to obfuscate the changes. I think this general approach had several aspects that would make it likely to pass scrutiny:

    • It was in XZ, which was likely not perceived as a security-critical library. A security person would recognize that anything is potentially security critical, but they don’t always have the resources, and so are directed to focus on the obviously security-related code and the historical magnets for security incidents.
    • It was carried out by someone who spent years building up an innocuous reputation. Investigation may even show previous “test samples” to be malicious but not caught, or else they were a red herring to get people used to random test samples being placed in the project.
    • The only “source code” he touched was “just build scripts.” Even during a security audit, build shell scripts are likely to be ignored; they are “just build scripts,” and while you might run some checks across all scripts, those checks aren’t going to catch this sort of misbehavior.
    • The actual runtime malicious code was delivered as portions of ostensibly throwaway test sample xz files. The malicious code is applied by binary-patching the build output. A security audit won’t be thinking too hard about a sea of binary files that are just throwaway samples as fodder for tests. (A conceptual sketch of the technique follows this list.)
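
    For illustration, here is a minimal Python sketch of the general technique the last two bullets describe: a build step that carves a payload out of an ostensibly corrupt test sample and splices it into the compiler’s output. Every file name, marker, and offset here is hypothetical; the real attack worked through the release tarball’s build scripts and was far more oblique.

    ```python
    # Conceptual sketch only -- hypothetical file names, marker, and offset.
    # The shape of the attack: a payload hidden inside a "test sample" is
    # applied to the build output *after* the compiler has run.
    import shutil

    TEST_SAMPLE = "tests/files/sample-corrupt.xz"  # looks like throwaway fuzz data
    BUILD_OUTPUT = "build/liblzma.so"              # legitimate compiler output
    MARKER = b"\xde\xad\xbe\xef"                   # hypothetical payload delimiter

    def extract_payload(path: str) -> bytes:
        """Return whatever bytes are smuggled in after MARKER, if any."""
        data = open(path, "rb").read()
        start = data.find(MARKER)
        return b"" if start == -1 else data[start + len(MARKER):]

    def patch_build_output(payload: bytes, offset: int) -> None:
        """Overwrite part of the built library with the smuggled payload."""
        shutil.copy(BUILD_OUTPUT, BUILD_OUTPUT + ".orig")
        with open(BUILD_OUTPUT, "r+b") as f:
            f.seek(offset)
            f.write(payload)

    payload = extract_payload(TEST_SAMPLE)
    if payload:                                    # on clean trees: silently a no-op
        patch_build_output(payload, offset=0x4000)
    ```

    Nothing in the shipped source reads as malicious; the payload exists only inside a binary blob, and the splice happens only at build time.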

    So while I see the point about the logical fallacy (it accidentally never got far enough to tell whether the enterprise release process would have caught it), I think we know track records well enough to deem this approach likely to get through. Now that it has been caught, I could see some changes that may mitigate this in the future, like package build scripts deleting all test samples and skipping tests when building for release, as well as broader scrutiny. One form that first idea could take is sketched below.
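
    A hedged sketch of what that mitigation could look like as a packaging step, in Python for illustration; the paths and suffix list are assumptions, not any distro’s actual policy:

    ```python
    # Before a *release* build, strip binary test fixtures from the unpacked
    # source tree so nothing inside them can feed the build.
    import pathlib

    SOURCE_TREE = pathlib.Path("xz-5.6.1")        # hypothetical unpacked tarball
    BINARY_SUFFIXES = {".xz", ".lzma", ".bin", ".gz"}

    removed = 0
    for path in (SOURCE_TREE / "tests").rglob("*"):
        if path.is_file() and path.suffix in BINARY_SUFFIXES:
            path.unlink()                         # no binary fodder survives
            removed += 1
    print(f"stripped {removed} binary test samples before the release build")
    ```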

    There’s also the reality that a lot of critical applications deem themselves too cool to settle for “old crusty enterprise distributions.” They think that approach is antiquated and that living on the edge is better. Admittedly I doubt they’d go as far as Arch, Tumbleweed, or Rawhide, but this one could have easily made it into Debian testing, a Fedora release, or an Ubuntu release.

    • Cosmic Cleric@lemmy.world · 3 months ago

      I think we know track records well enough to deem this approach likely to get through.

      That was my concern, and why I brought up my point.

      Human nature, especially where work is being done on a volunteer rather than a paid basis, combined with someone who is purposely being devious over the long term, could be a potent combination for disaster.

      I still wonder if there should be an actual open source project that does nothing but security audits of all other open source projects. Hence my original question, which was meant as an opener to a conversation I never got to elaborate on, because I was attacked almost immediately by people who are very sensitive about any criticisms or concerns about open source being brought out in the open.

      • jj4211@lemmy.world · 3 months ago

        The issue is that it implies that open source has a problem due to volunteers that is not found in closed source, which is not really the reality.

        You can look at a closed source vendor like Cisco and see backdoors, generally left over from developer access yet open to abuse. The nature of those is so blatantly obvious that any open review would have spotted them instantly, yet there they were.

        With this, you had a much more deviously obfuscated attack that probably would have passed through even serious security audits unnoticed, yet it was caught because someone was curious about a slight performance degradation and investigated. Having been in the closed source world, I can tell you that they never would have caught something like this. Anyone even vaguely saying they wanted to spend some time investigating a session startup delay of half a second would be chastised for wasting time.
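
        That investigation amounted to measuring startup times and refusing to shrug off the anomaly. A rough sketch of that kind of measurement follows; the command and the half-second threshold are placeholders, not the actual investigation setup:

        ```python
        # Time a process start repeatedly and flag an unexplained delay.
        import statistics
        import subprocess
        import time

        def median_startup(cmd: list[str], runs: int = 20) -> float:
            samples = []
            for _ in range(runs):
                t0 = time.perf_counter()
                subprocess.run(cmd, capture_output=True)
                samples.append(time.perf_counter() - t0)
            return statistics.median(samples)

        # Placeholder target; the real case involved sshd login latency.
        delay = median_startup(["ssh", "-V"])
        print(f"median startup: {delay * 1000:.0f} ms")
        if delay > 0.5:  # roughly the half-second anomaly described above
            print("noticeably slower than expected -- worth investigating")
        ```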

        Further, open source projects are also the fodder for security researchers to build their resumes. It’s hard to prove your mettle without public work, and catching vulnerabilities in OSS code is a popular feather in their cap.

        It also implies that open source is strictly a volunteer affair. Most commercial applications of a Linux platform involve paid employees doing some enablement, and that differs from place to place. Red Hat pays for security research, of course, and so do Google and Microsoft. I know at least one company that distrusts everything and repeats a whole bunch of security audits, including paying external companies to audit open source code. I would wager that folks downstream of, say, CentOS Stream or certain embedded platforms can feel pretty good about the audits. Of course, all bets are off when you go grab tarballs, npm, pip, etc.

        • Cosmic Cleric@lemmy.world · 3 months ago

          The issue is that it implies that open source has a problem due to volunteers that is not found in closed source, which is not really the reality.

          I (partially) disagree. Fundamentally, my belief is that someone who gets paid to do the work is more rigorous in doing it than someone who does it on a volunteer basis; it’s a human nature thing. Granted, I’m speaking very generally, and what I stated is not always true, but still.

          Also, corporations that write closed source programs are much more averse to being sued if their product fails (there’s a reason we’re seeing so many corporations slapping arbitration clauses into their agreements these days; they’re risk-averse).

          Open source projects tend to just be more careful about their code base not being tainted, write in disclaimers (“as-is”) to protect themselves legally against the product-failure scenario, and call it a day (again, very generally speaking; I use Fedora specifically for a reason).

          And speaking of Fedora, I do agree with your point that some open source projects are actually done by paid coders. I just believe that’s more the outlier than the norm, though. Some of that work is done by corporate employees, but still on a volunteer basis.

          Not dismissing that at all; I am thankful for corporations that actually spend time letting their employees do open source work, even if it’s just for their own direct benefit, as it also benefits everyone else.

          • jj4211@lemmy.world · 3 months ago

            Having worked with closed source, whatever they project externally, internally they are generally lazy and do the bare minimum. If there is a security review, it might just be throwing the code at something like BDBA, which just checks dependencies against CVEs, or maybe a static analysis tool like Coverity. That’s about as far as a moderately careful closed source shop goes. It is exceedingly rare for them to fund folks to endlessly fiddle with the code looking for vulnerabilities, and in my experience they actively work to rationalize away bugs if possible rather than allocate time to chasing a root cause and a fix.
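
            Those tools are proprietary, but the “check dependencies against known CVEs” step they automate can be sketched against the public OSV database (https://osv.dev); the package and version below are arbitrary examples:

            ```python
            # Query OSV for known vulnerabilities in one pinned dependency.
            import json
            import urllib.request

            def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
                query = json.dumps({
                    "version": version,
                    "package": {"name": name, "ecosystem": ecosystem},
                }).encode()
                req = urllib.request.Request(
                    "https://api.osv.dev/v1/query",
                    data=query,
                    headers={"Content-Type": "application/json"},
                )
                with urllib.request.urlopen(req) as resp:
                    return [v["id"] for v in json.load(resp).get("vulns", [])]

            # Example: an old release with published advisories.
            print(known_vulns("lxml", "4.6.0"))
            ```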

            There may be paragons of good closed source development, but there are certainly bad ones. Same with open source.

            I also think most open source work is explicitly done by employees nowadays, not just hobbyists, except in certain niches.

            • Cosmic Cleric@lemmy.world · 3 months ago

              internally they are generally lazy and do the bare minimum.

              Day to day, and with a lazy manager who is not technically knowledgeable, I would agree, and such managers do exist in corporations.

              But if you work for one who knows what they’re doing and gets a mandate from their boss to make sure the code doesn’t leave the corporation legally exposed, then not so much.

              Also, special events like Y2K get extra scrutiny for legal reasons, way up and above the normal level of scrutiny that production code gets.

              I’ve worked in both types throughout my career.

              • jj4211@lemmy.world · 3 months ago

                The same argument can be made about open source: some projects are very carefully and fastidiously managed, and others not so much.

                The main difference is that with closed source, it’s hard to know which sort of situation you are dealing with, and there’s no option for an interested third party to come along and fix a problematic project.