This is a pretty specific issue, so I’ll do my best to explain.
When archiving jobs, if a job’s files already exist in the archive directory, the whole operation fails: an error is thrown and archiving stops immediately. If multiple jobs were selected, any that were already copied to the archive directory before the error still remain in the active queue. This makes the problem worse, since you now have even more duplicate jobs in both the active and archive directories.
For example, say there are jobs A, B, C, and D, and somehow C has ended up in both the archive and active directories. If you select all four jobs and try to archive, A and B will be copied to the archive, C will cause an error, and the operation will stop. Now A, B, and C are all in both the active and archive directories. Unfortunately, I’m not sure how we ended up with a duplicate job in the first place, but it would only take one to snowball into a bigger issue.
I think this could be solved by archiving the jobs one by one, removing each from the queue as soon as it is successfully archived. Even better, raise the error only for the specific jobs that hit a conflict, and offer options for resolving it (overwrite, skip, etc.). A rough sketch of what I mean is below.
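To illustrate, here is a minimal sketch in Python. I don’t know the actual codebase, so everything here is hypothetical: the function name, the `ConflictPolicy` options, and the assumption that each job is a single file are all mine.

```python
import shutil
from enum import Enum
from pathlib import Path


class ConflictPolicy(Enum):
    SKIP = "skip"            # leave the conflicting job in the queue, keep going
    OVERWRITE = "overwrite"  # replace the existing archive copy
    ABORT = "abort"          # stop, but only after earlier jobs finished cleanly


def archive_jobs(jobs, active_dir: Path, archive_dir: Path,
                 policy: ConflictPolicy = ConflictPolicy.SKIP):
    """Archive jobs one at a time, removing each from the active
    directory only after its copy succeeds, so a conflict on one
    job never leaves earlier jobs duplicated in both places."""
    conflicts = []
    for job in jobs:
        src = active_dir / job
        dst = archive_dir / job
        if dst.exists():
            if policy is ConflictPolicy.SKIP:
                conflicts.append(job)
                continue  # keep going with the remaining jobs
            if policy is ConflictPolicy.ABORT:
                conflicts.append(job)
                break
            # OVERWRITE falls through to the copy below
        shutil.copy2(src, dst)  # copy first...
        src.unlink()            # ...remove from the queue only on success
    return conflicts            # report per-job conflicts to the caller
```

That way a single conflicting job produces one entry in the returned list instead of aborting the whole batch, and the caller could prompt the user once per conflict rather than failing everything.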