A reverse-delta backup strategy – obvious idea or bad idea?
I recently came up with a backup strategy that seems so simple I assume it must already exist — but I haven’t seen it in any mainstream tools.
The idea is:
The latest backup (timestamped) always contains a full copy of the current source state.
Any previous backups are stored as deltas: files that were deleted or modified compared to the next (newer) version.
There are no version numbers — just timestamps. New versions can be inserted naturally.
Each time you back up (a rough sketch in code follows these steps):
1. Compare the current source with the latest backup.
2. For files that changed or were deleted: move them into a new delta folder (timestamped).
3. For new/changed files: copy them into the latest snapshot folder (only as needed).
4. Optionally rotate old deltas to keep history manageable.
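Here's a rough sketch in Python of what one run could look like. The folder names (source/, backups/latest/) and the timestamp format are just placeholders I made up; it ignores symlinks, permissions and leftover empty directories, and leaves out step 4 (rotating old deltas).

    import filecmp
    import shutil
    from datetime import datetime, timezone
    from pathlib import Path

    SOURCE = Path("source")            # placeholder paths, not part of the scheme itself
    BACKUP_ROOT = Path("backups")
    LATEST = BACKUP_ROOT / "latest"

    def run_backup():
        # Colons in folder names don't work on Windows; a real cross-platform
        # tool would need a different timestamp format.
        delta = BACKUP_ROOT / datetime.now(timezone.utc).strftime("backup-%Y-%m-%dT%H:%M:%S")
        LATEST.mkdir(parents=True, exist_ok=True)

        src_files = {p.relative_to(SOURCE) for p in SOURCE.rglob("*") if p.is_file()}
        old_files = {p.relative_to(LATEST) for p in LATEST.rglob("*") if p.is_file()}

        # Step 2: files that were deleted or modified are *moved* into the delta folder.
        # shallow=False compares file contents; a real tool would probably use size/mtime.
        for rel in old_files:
            old = LATEST / rel
            if rel not in src_files or not filecmp.cmp(SOURCE / rel, old, shallow=False):
                (delta / rel).parent.mkdir(parents=True, exist_ok=True)
                shutil.move(old, delta / rel)

        # Step 3: new or changed files are copied into the latest snapshot.
        for rel in src_files:
            if not (LATEST / rel).exists():
                (LATEST / rel).parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(SOURCE / rel, LATEST / rel)

    run_backup()

On the very first run latest/ is empty, so step 3 simply produces a full copy, which matches the initial backup in the example below.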
This means:
The latest backup is always a usable full snapshot (fast restore).
Previous versions can be reconstructed by applying reverse deltas.
If the source is intact, the system self-heals: corrupted backups are replaced on the next run.
Only one full copy is needed, like a versioned rsync mirror.
As time goes by, losing old versions is low-impact.
It's user-friendly, since the latest backup can be browsed with a regular file explorer.
Example:
Initial backup:
    latest/
        a.txt    # "Hello"
        b.txt    # "World"
Next day, a.txt is changed and b.txt is deleted:
    latest/
        a.txt    # "Hi"
    backup-2024-06-27T14:00:00/
        a.txt    # "Hello"
        b.txt    # "World"
The newest version is always in latest/, and previous versions can be reconstructed by applying the deltas in reverse.
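For concreteness, a matching restore sketch under the same placeholder layout: copy latest/, then overlay every delta folder that is newer than the point in time you want, walking from newest to oldest so the delta closest to the target wins. One caveat this simple layout doesn't cover: files created after the target time stay around, because nothing records that they didn't exist yet.

    import shutil
    from pathlib import Path

    BACKUP_ROOT = Path("backups")      # same placeholder layout as the sketch above
    LATEST = BACKUP_ROOT / "latest"

    def restore(target_timestamp, dest):
        shutil.copytree(LATEST, dest)  # start from a full copy of the newest snapshot

        # Delta folders sort chronologically because their names are ISO timestamps.
        deltas = sorted(
            (p for p in BACKUP_ROOT.iterdir() if p.is_dir() and p.name.startswith("backup-")),
            key=lambda p: p.name,
            reverse=True,              # newest first
        )
        for delta in deltas:
            if delta.name.removeprefix("backup-") <= target_timestamp:
                break                  # this delta and everything older is already part of the target state
            for f in delta.rglob("*"):
                if f.is_file():
                    out = dest / f.relative_to(delta)
                    out.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(f, out)   # files from older deltas overwrite newer ones

    # Recover the pre-change state from the example (a.txt = "Hello", b.txt = "World"):
    restore("2024-06-27T00:00:00", Path("restored"))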
I'm curious: has this been done before under another name? Are there edge cases I’m overlooking that make it impractical in real-world tools?
Would love your thoughts.
For reference: a comprehensive backup + security plan for individuals https://nau.github.io/triplesec/
Great resource in general; I'll look into whether it describes how to implement this backup scheme.
It works. Already implemented: https://rdiff-backup.net/ https://github.com/rdiff-backup/rdiff-backup
There are also other tools that implement reverse incremental backup, or backup with reverse deduplication, storing the most recent backup in contiguous form and fragmenting the older backups.
Thank you for bringing this to my attention. Knowing that there is a working product using this approach gives me confidence. I'm working on a simple backup app for my personal/family use, so it's good to know I'm not heading in the wrong direction.
It sounds like this method is I/O intensive as you are writing the complete image at every backup time. Theoretically, it could be problematic when dealing with large backups in terms of speed, hardware longevity, and write errors, and I am not sure how you would recover from such errors without also storing the first image. (Or I might be misunderstanding your idea. It is not my area.)
You can see in steps 2 and 3 that no full copy is written each time. It's only move operations to create the delta, plus copies of new or changed files, so the I/O is quite minimal.
I used to work on backup software. Our first version did exactly that. It was a selling point. We later switched approach to a deduplication based one.
Exciting!
Yes, the deduplicated approach is superior, if you can accept requiring dedicated software to read the data or can rely on a file system that supports it (like Unix with hard links).
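For what it's worth, the hard-link flavor can be sketched in a few lines (assuming a POSIX filesystem; the paths and function name are just illustrative): unchanged files in a new snapshot are hard-linked to the previous one, so the data exists once on disk while every snapshot still looks like a full copy in a file explorer.

    import filecmp
    import os
    import shutil
    from pathlib import Path

    def snapshot(source, prev_snap, new_snap):
        for src in source.rglob("*"):
            if not src.is_file():
                continue
            rel = src.relative_to(source)
            dst = new_snap / rel
            dst.parent.mkdir(parents=True, exist_ok=True)
            prev = prev_snap / rel if prev_snap else None
            if prev and prev.is_file() and filecmp.cmp(src, prev, shallow=False):
                os.link(prev, dst)       # unchanged: new snapshot shares the old data blocks
            else:
                shutil.copy2(src, dst)   # new or changed: store a fresh copy

    # e.g. snapshot(Path("source"), Path("snaps/2024-06-26"), Path("snaps/2024-06-27"))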
I'm looking for a cross-platform solution that is simple and can restore files without any app (in case I don't maintain my app for the next twenty years).
I'm curious whether the software you were working on used a proprietary format, relied on Linux, or used some other method of deduplication.
The low likelihood / high impact edge case this does not handle is: "Oops, our data center blew up." An extreme scenario, but one that this method does not handle. It instead turns your most recent backup into a single point of failure because you cannot restore from other backups.
This sounds more like a downside of single-site backups.
What happens if, in the process of all this reading, writing, and rewriting, data gets corrupted?
In this algorithm nothing is rewritten. A diff between the source and latest is made, the changed or deleted files are archived to a folder, and the latest folder is updated from the source, like rsync. No more I/O than any other backup tool. Versions other than the latest one are never touched again.
It seems like ZFS/Btrfs snapshots would do this.
No, they work the opposite way using copy-on-write.
"For files that changed or were deleted: move them into a new delta folder. For new/changed files: copy them into the latest snapshot folder." is just redneck copy-on-write. It's the same result but less efficient under the hood.
Nice to realize that this boils down to copy-on-write. Makes it easier to explain.
The more common approach now is incrementals forever with occasional synthetic full backups computed at the storage end. This minimises backup time and data movement.
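Roughly, the synthetic-full idea looks like this (purely illustrative, using in-memory dicts as stand-ins for file trees; real products do this at the block level on the storage side):

    def synthetic_full(base, incrementals):
        # base: the last real full backup, as {path: content}
        # incrementals: forward deltas in chronological order, {path: new_content or None}
        full = dict(base)
        for inc in incrementals:
            for path, content in inc.items():
                if content is None:      # None marks a deletion
                    full.pop(path, None)
                else:
                    full[path] = content
        return full                      # the new "full", built without re-reading the source

    base = {"a.txt": "Hello", "b.txt": "World"}
    incs = [{"a.txt": "Hi", "b.txt": None}]
    print(synthetic_full(base, incs))    # {'a.txt': 'Hi'}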
I agree it seems more common. However, backup time and data movement should be equivalent if you follow the algorithm's steps.
According to ChatGPT, the forward-delta approach is common because it can be implemented purely append-only, whereas reverse deltas require the last snapshot to be mutable. This doesn't work well for backup tapes.
Do you also think that the forward delta approach is a mere historical artifact?
Although perhaps backup tapes are still widely used; I have no idea, I'm not in this field. If so, the reverse-delta approach would not work in industrial settings.
Nobody[1] backs up directly to tape any more. It’s typically SSD to cheap disk with a copy to tape hours later.
This is more or less how most cloud backups work. You copy your “premium” SSD to something like a shingled spinning rust (SMR) drive that behaves almost like tape for writes but like a disk for reads. Then monthly this is compacted and/or archived to tape.
[1] For some values of nobody.