This question comes up over and over again: what if the source system "suffers" from hard deletes (many old ones do; a row simply disappears instead of being flagged as deleted), and you also need incremental loads because full ones are too expensive, time-consuming, or inefficient? Our best practice is to deal with the issue as early as possible, meaning get to the point where you have a table that contains the historical records that are missing from the source system, flagged as deleted. That allows for "clean" operation downstream: either filter the flagged rows out in the input mapping or in a transformation step, or pass them all the way to the end system and filter there. Either way, there's no risk to referential integrity.

The simplest way to achieve this is, besides your incremental fetch of data from an object/table, to ALSO run a full load of all the IDs from that same table. A quick join in the first transformation that touches this data can then generate the "is deleted" flag (set for rows present in the accumulated data but missing from the list of IDs still present in the source system). Finally, make sure that only rows that changed (including rows whose flag was toggled) end up in the incremental output of that transformation.
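A minimal sketch of that join step, in plain Python rather than SQL. The table and column names (`history`, `id`, `is_deleted`) are illustrative assumptions, not anything prescribed by a particular tool; the point is the logic: compare the accumulated history against the full list of live IDs, and emit only the rows whose deleted flag actually flipped, which is exactly what the incremental output should contain.

```python
def flag_deletes(history, live_ids):
    """Return only the rows whose is_deleted flag changed this run.

    history   -- accumulated rows (list of dicts), each with an 'id'
                 and an 'is_deleted' flag; hypothetical schema.
    live_ids  -- the full list of IDs currently present in the source.
    """
    live = set(live_ids)
    changed = []
    for row in history:
        should_be_deleted = row["id"] not in live
        if row.get("is_deleted", False) != should_be_deleted:
            # Flag toggled (a hard delete was detected, or a row
            # reappeared in the source), so it belongs in the
            # incremental output.
            changed.append(dict(row, is_deleted=should_be_deleted))
    return changed


history = [
    {"id": 1, "name": "a", "is_deleted": False},
    {"id": 2, "name": "b", "is_deleted": False},
    {"id": 3, "name": "c", "is_deleted": True},
]
live_ids = [1, 3]  # id 2 vanished from the source; id 3 reappeared

changed = flag_deletes(history, live_ids)
```

In a real pipeline this would typically be a LEFT JOIN in the transformation's SQL, but the shape is the same: the full ID list is cheap to fetch even when the full table is not, and the diff against history is what keeps the output incremental.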

Now you've contained the problem, and nothing you do downstream will be affected (your data looks as if the source system had actually implemented deletes correctly).

Of course, the above applies to incremental data fetches; different options present themselves when using binlogs as the data source (in cases where that makes sense).