Tom,
> If you have anything I'd love to see it.
Unfortunately nothing for public distribution. We provide our customers
with a test framework which is intended to help validate our tools for
the production of evidence suitable for admission in a court of law. A
lot of work remains to be done.
As you know, volatile evidence collection tools cannot be validated in
the same manner as traditional computer forensic tools, which are
designed for the acquisition of non-volatile storage media. There is no
"image" that you can acquire and then reproduce. A running computer
system is often modeled as a continuous-time stochastic process. By its
nature a continuous-time stochastic process cannot be measured in its
entirety; it can only be sampled.
Nevertheless, while they cannot be measured in their entirety, running
computer systems do possess two attributes which may make it possible
to validate, or rather invalidate, memory acquisition tools. The first
attribute is that a running computer system is a STRUCTURED stochastic
process. It is not entirely random and could not run if it were. The
operating system, processor and other hardware define certain
structural elements whose presence may be inferred merely from the fact
that the system is running. These structural elements CAN be measured,
and if they are not found in precisely the right locations then your
memory "dump" is crap.
The second attribute derives from the fact that Microsoft Windows (and
probably most other general-purpose operating systems) provides an API
which permits the programmer/user to define a discrete-time stochastic
process within the context of the larger continuous-time stochastic
process. By this I mean that you can load and lock a block of data with
a known hash value into memory at a fixed location. You can then
acquire the memory "dump" and attempt to recover the data block from
it. If the number of samples is sufficiently large and the samples are
representative with respect to location (below 4 GiB, above 4 GiB,
"unmanaged" memory), architecture (Intel, AMD, NUMA, non-NUMA) and
amount of memory (< 4 GiB, > 4 GiB, > 16 GiB), then it should be
possible to make an inference about the reliability of acquisition of
memory as a whole based upon the reliability of acquisition of the
samples.
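Again only as a sketch of the idea, not a description of our framework:
the Python below uses ctypes against the documented kernel32
VirtualAlloc and VirtualLock calls to commit and lock a small marker
buffer with known per-page hashes, then waits while the image is
acquired. Caveats: VirtualLock merely pins the pages in RAM, it neither
lets you choose nor reveals their physical addresses from user mode,
and the amount you can lock is bounded by the process working-set
minimum. The random marker and the 64 KiB size are arbitrary choices of
mine.

    import ctypes, ctypes.wintypes as wt, hashlib, os

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.VirtualAlloc.restype = ctypes.c_void_p
    kernel32.VirtualAlloc.argtypes = [ctypes.c_void_p, ctypes.c_size_t,
                                      wt.DWORD, wt.DWORD]
    kernel32.VirtualLock.restype = wt.BOOL
    kernel32.VirtualLock.argtypes = [ctypes.c_void_p, ctypes.c_size_t]

    MEM_COMMIT, MEM_RESERVE, PAGE_READWRITE = 0x1000, 0x2000, 0x04
    PAGE = 0x1000
    SIZE = 64 * 1024   # small enough to lock under default working-set limits

    # Random marker; record its per-page hashes at "plant" time so that
    # recovery from the dump can be verified page by page later on.
    marker = os.urandom(SIZE)
    page_hashes = [hashlib.sha256(marker[i:i + PAGE]).hexdigest()
                   for i in range(0, SIZE, PAGE)]

    addr = kernel32.VirtualAlloc(None, SIZE, MEM_COMMIT | MEM_RESERVE,
                                 PAGE_READWRITE)
    if not addr:
        raise ctypes.WinError(ctypes.get_last_error())
    ctypes.memmove(addr, marker, SIZE)           # copy the marker into place
    if not kernel32.VirtualLock(addr, SIZE):     # pin it in physical RAM
        raise ctypes.WinError(ctypes.get_last_error())

    print("PID %d locked %d bytes at VA %#x" % (os.getpid(), SIZE, addr))
    for n, h in enumerate(page_hashes):
        print("page %2d SHA-256 %s" % (n, h))
    input("Acquire the memory image now, then press Enter to exit...")

Verification is then a matter of sweeping the acquired image for each
4 KiB page of the marker (the pages need not be physically contiguous)
and checking the hashes; if planted, locked pages do not come back out
of the "dump" intact, the tool has a problem.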
In any event, being able to exclude particularly unreliable memory
acquisition tools would be a step forward, even if it falls short of
validating the tools that remain for LE evidentiary purposes.
I thought that Volatility might be able to play a useful role in
developing a public test framework if it were reworked to identify
which of these structural elements are present or missing (e.g.
missing page tables) in a documentable way. I asked Aaron about it
last year but never got any response.
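As a rough illustration of the kind of documentable check I have in
mind (my own sketch, not anything Volatility exposes today): x64
Windows maps the page tables into themselves through a self-referencing
PML4 entry, so a flat raw image, where file offset equals physical
address, can be swept page by page for candidate PML4s. If a dump of a
known-running system yields no plausible candidate, the page tables did
not make it into the image, and that fact can be reported.

    PAGE = 0x1000
    PHYS_MASK = 0x000FFFFFFFFFF000   # bits 12..51 of a 64-bit page-table entry

    def looks_like_pml4(page_bytes, page_phys):
        """Heuristic: some present entry points back at this page itself."""
        for i in range(512):
            entry = int.from_bytes(page_bytes[i * 8:(i + 1) * 8], "little")
            if entry & 1 and (entry & PHYS_MASK) == page_phys:
                return True
        return False

    def candidate_pml4s(path):
        """Sweep a flat raw physical image for self-referencing PML4 pages."""
        hits, phys = [], 0
        with open(path, "rb") as f:
            while True:
                page = f.read(PAGE)
                if len(page) < PAGE:
                    break
                if looks_like_pml4(page, phys):
                    hits.append(phys)
                phys += PAGE
        return hits

Note the assumption of a flat image; real acquisitions with memory
holes or device-reserved ranges complicate the offset-to-physical
mapping, which is exactly the sort of thing a public framework would
have to document.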
Regards,
gmg.