Disk Wiping

Most of us are sensitive about our personal information.

I want to understand this wiping process better. People say that no matter what you do at the software level, you can't destroy data unless the disk itself is destroyed.

But I don't understand. People say, "No matter how much you wipe the disk, it just gets harder to recover the data. Going deeper is enough to recover wiped data." It can't be infinite. There has to be a limit to how deep you can go.

Can you explain it to me?

Since I don't understand it fully, I had a hard time explaining myself. I hope you understood me.
There is a limit, but you can always get closer. Think of mathematical limits: you can keep getting closer, but you will never actually reach the limit.
Like the golden ratio? You can never reach the center, but you keep getting closer.
What exactly does it mean to "go deeper"?
From what I read, it's like layers. Every time data is written, the former data goes deeper and gets harder to reach.
You're right. It isn't even close to infinite. In fact, it is just about the opposite.

http://www.nber.org/sys-admin/overwritten-data-guttman.html

Overwriting the data a few times is more than sufficient to make it impossible to recover the original electromagnetic bit patterns.

The bigger issue is how your file is distributed on the disk -- and what pieces of it are left lying around after erasure.

Hope this helps.
Wow. That article was hard to understand, but it cleared up the questions in my mind.

Thanks everyone.
Ultimately, the data on a disk is an analog signal. When you overwrite it with a new signal, traces of the previous signal are still there. With enough equipment and determination, the previous signal might be recovered.

But really, is your data so incredibly important to the national security of this or another country to warrant such measures? I know mine isn't.

I have two programs that I keep around to wipe out files and whole disks. They are very simple. The first one is called wipefile. It simply overwrites a file with random data. Of course I can't be completely certain, but it seems likely to me that when you overwrite a file, the OS doesn't allocate different sectors to the file; it simply overwrites the ones that are already there.

The second tool is called wipedisk. This just creates and writes one file after another with size just under 2GB until the disk fills up. Then it deletes them all. This process overwrites all unallocated sectors on the disk.
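For illustration, here's a minimal sketch of what a wipefile-style tool could look like. This is a guess at the general approach, not dhayden's actual program: open the file in update mode so the existing allocation is reused, then overwrite every byte with pseudo-random data for a few passes.

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>

// Overwrite every byte of an existing file, in place, with pseudo-random
// data. Update mode ("r+b") neither truncates nor extends the file, so the
// OS has no reason to allocate new sectors for the writes.
bool wipe_file(const char* path, int passes = 3) {
    std::FILE* f = std::fopen(path, "r+b");
    if (!f) return false;

    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);          // assumes the file fits in a long

    std::vector<unsigned char> buf(4096);
    for (int p = 0; p < passes; ++p) {
        std::fseek(f, 0, SEEK_SET);
        long left = size;
        while (left > 0) {
            long n = left < (long)buf.size() ? left : (long)buf.size();
            for (long i = 0; i < n; ++i)
                buf[i] = (unsigned char)(std::rand() & 0xFF);
            std::fwrite(buf.data(), 1, (size_t)n, f);
            left -= n;
        }
        std::fflush(f);                 // push this pass to the OS before the next
    }
    std::fclose(f);
    return true;
}
```

Note that even this relies on the filesystem reusing the same sectors; journaling or copy-on-write filesystems, and wear leveling on SSDs, can still leave old copies of the data behind.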
Ceset, I don't quite think that you understand how HDDs work... All it is is a magnetic disk. I forget what element/compound is used, but when it is exposed to a magnetic field, it holds the polarity of that field until it comes into contact with another one.

So, your answer is yes: just get a big-ass magnet and run it over the disk, and no one will be able to recover anything. Even if they could recover a tangible piece of information, it wouldn't be enough to use.

Over-writing the entire disk with random bits repeatedly would make it nearly impossible to recover anything. While Duoas' article suggests that someone could conceivably recover over-written data with a microscope, I highly doubt that anyone would go over trillions of bits with a microscope because you're just that important.

dhayden said:
Ultimately the data a disk is an analog signal.


NOT TRUE. Data on a disk consists of positively charged or negatively charged areas. These areas are extremely small. A read head at the end of an actuator allows data to be read and written to the disk through an electromagnet.

dhayden said:
When you overwrite it, with a new signal, traces of the previous signal are still there


That's a farce. Yes: there are incredibly small differences in the polarity's charge on a bit, but it's such a small difference that it can't be measured with any instrument to date. If there is such an instrument, it would still take loads of effort to recover effectively erased data, and even then such equipment wouldn't be widely available.

I assume you're using winblows since you aren't using dd to create a block-encrypted device, because, you know, your data's sooooo important! (facetious) In any case, all those programs do is over-write the files' contents and then erase them, because I doubt windows actually has a way to let you tell it to write directly to the disk (after all, why would they make windows not such a piece of crap?). The bottom line is that unless you're completely erasing the partition table and then over-writing every bit on the device, you're not effectively erasing your data.

If you want to protect your data, the best thing you can do is create a block encryption device and save your important stuff on it, and only mount it when you really need it.

dhayden said:
the OS doesn't allocate different sectors to the file

You're right: the file gets fragmented across countless sectors if you never defragment your hdd on windows. On linux, you never need to defragment your hdd, because linux doesn't suck. There is no way to keep any file in a single sector without limiting its size.

You shouldn't spread farcical information.
NOT TRUE. Data on a disk consists of positively charged or negatively charged areas. These areas are extremely small. A read head at the end of an actuator allows data to be read and written to the disk through an electromagnet.
I don't see how this is a refutation that data is stored analogically.
Which is actually true. Data is stored as the relative magnetic orientation of microscopic rust particles. A particle may become oriented in neither direction, for example by pointing orthogonally.
That said, the particular encoding used makes data remanence difficult, except in case of storage failure.

it's such a small difference that it can't be measured with any instrument to date
[citation needed]

because, you know, your data's sooooo important!
This is the same retarded line of thought behind the "you should not fear surveillance if you have nothing to hide" argument.
Your data is exactly as valuable as other people think it is, not as you think it is.

I doubt windows actually has a way to let you tell it to write directly to the disk (after all, why would they make windows not such a piece of crap?)
The API actually makes it pretty trivial to write such a program. But why would you bother checking your facts when it's easier to just post uninformed garbage?

You're right: the file gets fragmented accross countless sectors if you never defragment your hdd on windows. On linux, you never need to defragment your hdd, because linux doesn't suck.
It doesn't matter how many times Linux zealots repeat it, that won't make any of those three claims any less false.
In any case, fragmentation has nothing to do with what dhayden was saying. Maybe a little less proselytism and a little more reading comprehension next time, hmm?
While Duoas' article suggests that someone could conceivably recover over-written data with a microscope

No, the article (and my attendant comments) explicitly state exactly the opposite.

Pay attention now.
IWishIKnew said:
... I doubt windows actually has a way to let you tell it to write directly to the disk (after all, why would they make windows not such a piece of crap?).


In addition to what helios has already said, the command to do this is fsutil file setzerodata offset=0 length=<filesize> <filename> from an elevated command prompt.

To be thorough on 7 and server *, you need to query the shadow-copy providers to enumerate any previous instances of that volume and wipe those as well, since compiling these alternate copies with what can be recovered is primarily how data-recovery software works.

Also, Windows Vista is when background de-fragmentation started, with a service stub called "Disk Defragmenter". Before that point the concern was with disk throughput while reallocating file segments, since Windows XP is more than ten years old and the hardware it was designed for was not as fast as it is now.

I'm asking because I want you to be honest here: where do you get this stuff? It seems just a little too much for you to be making it all up on the spot.
Thanks Helios.

I think IWishIKnew misunderstood much of what I was saying.

NOT TRUE. Data on a disk consists of positively charged or negatively charged areas.
Yes. In other words, an analog signal.

That's a farce. Yes: there are incredibly small differences in the polarity's charge
You say it's a farce and in the next sentence agree with me....

There is no way to keep any file in a single sector without limiting its size.
Let me try to clarify. Say you have a 100 KB file on disk. You open it for writing and write 1 KB of data at the beginning of the file. The question is whether that will overwrite the sectors already allocated to the file, or whether the file system might allocate two new sectors (assuming 512-byte sectors) for the write and mark the previous sectors as unallocated space. To be clear, in neither case does the size of the file change. If the space allocated to a file remains allocated when you write to it, then a tool like my wipefile utility will erase the data, at least to the extent that recovering the previous generation of the signal gets much harder, as described before.
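You can check the "size doesn't change" half of this with a small experiment (a sketch with made-up names, not the wipefile utility itself): open an existing file in update mode, overwrite its first bytes, and confirm the size afterwards. Whether the same *physical sectors* were reused is still up to the filesystem and can't be observed this way.

```cpp
#include <cstdio>
#include <vector>

// Overwrite the first `nbytes` of an existing file in update mode and
// return the file's size afterwards. Update mode ("r+b") writes over the
// bytes already there; it does not insert or truncate.
long overwrite_head(const char* path, long nbytes) {
    std::FILE* f = std::fopen(path, "r+b");
    if (!f) return -1;

    std::vector<char> pattern(nbytes, '\xFF');
    std::fwrite(pattern.data(), 1, (size_t)nbytes, f);  // overwrite in place

    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);          // size after the partial overwrite
    std::fclose(f);
    return size;
}
```

Writing 1 KB at the start of a 100 KB file this way leaves the size at 100 KB, matching the scenario above: the write reuses the file's existing extent rather than growing it.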
Topic archived. No new replies allowed.