Hello everyone. I’m trying to use a systemd path unit to monitor a directory structure, but so far I’ve only managed to watch the top-level directory. The unit should be triggered when a new file is written to the top level of the monitored directory or to any of its subdirectories. I don’t know how to do that. Any ideas?
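
For reference, the part that already works for the top level looks roughly like this (paths and unit names are placeholders, and the exact `[Path]` directive is just an example); as far as I understand, a path unit only watches the given directory itself, not anything below it:

```ini
# /etc/systemd/system/uploads-watch.path  (hypothetical name)
[Unit]
Description=Watch the top-level upload directory for new files

[Path]
# Fires when the watched directory contains at least one file.
# This only covers /srv/uploads itself, not its subdirectories.
DirectoryNotEmpty=/srv/uploads
Unit=uploads-process.service

[Install]
WantedBy=multi-user.target
```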

Additionally, the triggered service unit should be delayed for some time. The background is that I sometimes upload more than one file in a batch, so I want to give the script triggered by the service unit a chance to wait until the upload of all files is finished. That way I can handle all the new files with one script call instead of one call per file. Is that possible?
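
For the delay, I could probably just make the triggered service sleep before it calls the script, something like this (unit and script names are placeholders), but maybe there is a cleaner way:

```ini
# /etc/systemd/system/uploads-process.service  (hypothetical name)
[Unit]
Description=Process newly uploaded files after a grace period

[Service]
Type=oneshot
# Crude delay: wait two minutes so a whole upload batch can finish
# before the script runs once over all the new files.
ExecStartPre=/bin/sleep 120
ExecStart=/usr/local/bin/process-uploads /srv/uploads
```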

  • thelastknowngod@lemm.ee · 11 months ago

    Are you sure this is the most efficient way to accomplish the goal?

    Resilio Sync will just sync files to different locations automatically… You don’t need to worry about firewall rules or DNS/IP addresses with it, either…

    • sudo_su@feddit.deOP · 11 months ago

      Since the script I’m talking about makes some changes to the synced files, this is not a job for Resilio Sync. For the sync itself I’m using SFTP, because that’s the easiest to set up on all clients/platforms. I’m only interested in how I could safely detect that the sync is finished and then start the script to do its job. The tip with the changing file is nice; I’m using that for now. Absolutely reliable so far for this task.
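
      For detecting the end of the sync without a marker file, a simple settle loop before the real work would probably also do it (directory, quiet period and script name are made-up placeholders):

      ```sh
      #!/bin/sh
      # Wait until nothing below DIR has been modified recently, then process.
      DIR=/srv/uploads        # assumed sync target
      QUIET_MIN=2             # "no writes for this many minutes" = sync finished

      # Loop while at least one file was modified within the last QUIET_MIN minutes.
      while [ -n "$(find "$DIR" -type f -mmin "-$QUIET_MIN" -print -quit)" ]; do
          sleep 30
      done

      /usr/local/bin/process-uploads "$DIR"   # placeholder for the actual script
      ```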

  • bahmanm@lemmy.ml · 11 months ago

    I don’t think you’ll be able to achieve that with systemd path units, I’m afraid. It’s not a use case they are designed for.

    It’s hard to come up with a suggestion without knowing more about the depth of the directory and the number of nodes in each level. But you could try updating a dummy file such as latest_timestamp in the top-level directory (which a systemd path unit can monitor, BTW) and let the service unit be triggered by that.
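
    A minimal sketch of that idea (directory and file names are only examples, and the service name is assumed):

    ```ini
    # /etc/systemd/system/uploads-marker.path  (hypothetical name)
    [Unit]
    Description=Watch the dummy file in the top-level upload directory

    [Path]
    # Fires whenever the dummy file is written to.
    PathModified=/srv/uploads/latest_timestamp
    Unit=uploads-process.service

    [Install]
    WantedBy=multi-user.target
    ```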

    • sudo_su@feddit.deOP · 11 months ago

      The depth changes constantly, because new subdirectories are created and removed during the day and/or the upload/sync process. That’s why the script walks the complete directory structure every time. But the dummy file is a nice suggestion: in that case I can monitor only the dummy file and trigger the script whenever it changes. Good idea.
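
      To make sure the dummy file only changes after a complete batch, I can simply upload it as the last step of the SFTP batch, roughly like this (host, paths and file names are placeholders):

      ```sh
      #!/bin/sh
      # Hypothetical upload batch; the marker file is sent last so the
      # path unit on the server only fires after all real files are there.
      touch /tmp/latest_timestamp             # fresh local timestamp file
      sftp -b - user@example.com <<'EOF'
      cd /srv/uploads
      put report-01.csv
      put report-02.csv
      put /tmp/latest_timestamp latest_timestamp
      EOF
      ```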