George,
   I am wondering what aspect of validation you are particularly
concerned with. What are the scenarios in which you would conclude
that an image is "crap", as you say?
For example, do you consider a failure to acquire BIOS memory to make
an image invalid, even though that region may be irrelevant to many
investigations? If the tool appears to have some data in a particular
region, do you assume this data is valid, or could it be data from
another region? (Does the tool give you what should be there? What
exactly is the true data?)
Many regions cannot be acquired safely (such as DMA-mapped regions);
do you consider that a tool should be allowed to skip these? How does
the tool find these regions in the first place? Querying the OS about
DMA-mapped regions is trivial to hook, which can allow malware to hide
safely in regions the tool is never going to image.
In practice, anti-forensics is a bigger threat. At the end of the day
you can never be fully convinced that a tool is resilient to it, since
you are running on the owned box. I think we need to acknowledge that
there are limitations in acquisition techniques and not claim that a
tool is "crap" if it does not do X or Y. There is no perfect
technique; whatever technique a tool uses, a rootkit that already
controls the box can work to circumvent it.
Regarding your argument for invalidating a tool because an internal
memory structure is inconsistent: how do you account for acquisition
smear introducing subtle errors into the memory data itself? Different
pages are captured at different instants while the system keeps
running, so speed of acquisition is important to minimize this smear,
but it is always there. Just because the internal data structures are
not consistent in a particular image does not mean the tool is invalid
- this is simply the nature of memory forensics.
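To make this concrete, here is a minimal sketch of the kind of
internal-consistency check you seem to have in mind - walking a kernel
doubly-linked list in the image and testing Flink/Blink reciprocity.
The helper read_ptr() is hypothetical; it stands in for whatever image
reading and address translation your framework provides:

    # Minimal sketch of an internal-consistency check over a memory image:
    # walk a kernel doubly-linked list and test Flink/Blink reciprocity.
    # read_ptr() is a hypothetical helper that reads an 8-byte pointer at a
    # (translated) address in the image; list_head is the list head address.

    def check_list_consistency(read_ptr, list_head, max_entries=100000):
        entries, mismatches = 0, 0
        prev = list_head
        current = read_ptr(list_head)            # Flink of the list head
        while current != list_head and entries < max_entries:
            if read_ptr(current + 8) != prev:    # Blink should point back
                mismatches += 1
            prev, current = current, read_ptr(current)   # follow Flink
            entries += 1
        return entries, mismatches

    # Pages holding different links are captured at different instants, so
    # a list that was being modified during acquisition can fail this check
    # even though every individual page was copied faithfully.
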
Your idea of locking pages into memory and checking that they come out
in the image sounds nice. Unfortunately, IMHO it may lead to even more
problems than you might imagine. If a significant amount of physical
memory is locked at the same time, the kernel comes under memory
pressure, which might force many other pages to be flushed to the page
file. This also has a dramatic effect on the performance of the
machine, increasing acquisition time significantly. That alone damages
image quality by increasing smear between the locked regions and the
unlocked regions, not to mention that you lose most of your pages to
swap.
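For what it's worth, the locking mechanism itself shows where the
pressure comes from. On Windows, VirtualLock() will not pin more than
the process minimum working set allows, so a harness that wants to pin
a lot of memory has to raise the working-set floor first (a rough
ctypes sketch; the sizes are illustrative and not taken from any
existing tool):

    # Rough sketch (Windows only; sizes are illustrative): pin a buffer of
    # known content in RAM.  VirtualLock() fails unless the process minimum
    # working set is large enough, so locking a lot of memory means raising
    # the working-set floor first -- which is exactly where the pressure on
    # every other page in the system comes from.
    import ctypes, os

    kernel32 = ctypes.windll.kernel32
    kernel32.GetCurrentProcess.restype = ctypes.c_void_p
    kernel32.SetProcessWorkingSetSize.argtypes = (
        ctypes.c_void_p, ctypes.c_size_t, ctypes.c_size_t)
    kernel32.VirtualAlloc.restype = ctypes.c_void_p
    kernel32.VirtualAlloc.argtypes = (
        ctypes.c_void_p, ctypes.c_size_t, ctypes.c_uint32, ctypes.c_uint32)
    kernel32.VirtualLock.argtypes = (ctypes.c_void_p, ctypes.c_size_t)

    LOCK_SIZE = 64 * 1024 * 1024                     # arbitrary: 64 MiB
    MEM_COMMIT, MEM_RESERVE, PAGE_READWRITE = 0x1000, 0x2000, 0x04

    # Raise the working-set minimum (and maximum) so the lock can succeed;
    # the kernel then keeps this much of the process resident at all times.
    slack = 16 * 1024 * 1024
    kernel32.SetProcessWorkingSetSize(kernel32.GetCurrentProcess(),
                                      LOCK_SIZE + slack,
                                      LOCK_SIZE + 4 * slack)

    buf = kernel32.VirtualAlloc(None, LOCK_SIZE,
                                MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE)
    ctypes.memmove(buf, os.urandom(4096) * (LOCK_SIZE // 4096), LOCK_SIZE)
    if not kernel32.VirtualLock(buf, LOCK_SIZE):
        raise ctypes.WinError()

Scale LOCK_SIZE up to a meaningful fraction of physical memory and the
rest of the system starts paging, which is the smear and performance
problem I described above.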
Unfortunately, this idea will also alert any rootkit, which can then
move out of the locked memory regions, so the quality of the image is
compromised anyway.
Michael.
On 9 March 2012 18:06, George M. Garner Jr. (online)
<ggarner_online(a)gmgsystemsinc.com> wrote:
  Tom,
  If you have anything I'd love to see it.
 Unfortunately nothing for public distribution.  We provide our customers
 with a test framework which is intended to help validate our tools for the
 production of evidence suitable for admission in a court of law.  A lot of
 work remains to be done.
 As you know, volatile evidence collection tools cannot be validated in
 the same manner as traditional computer forensic tools, which are
 designed for the acquisition of non-volatile storage media.  There is no
 "image" that you can acquire and then reproduce.  Running computer
 systems are often modeled as a continuous-time stochastic process.  By
 its nature a continuous-time stochastic process cannot be measured in
 its entirety.  It can only be sampled.
 Nevertheless, while they cannot be measured in their entirety, running
 computer systems do possess two attributes which may make it possible to
 validate, or rather invalidate, memory acquisition tools.  The first
 attribute is that a running computer system is a STRUCTURED stochastic
 process.  It is not entirely random and could not run if it were.  The
 operating system, processor and other hardware define certain structural
 elements the presence of which may be inferred merely by the fact that the
 system is running.  These structural elements CAN be measured, and if
 they are not found in precisely the right locations, then your memory
 "dump" is crap.
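 To give a concrete example of such a structural element: a running
 Windows x64 system must have a page-table root (PML4) for the active
 address space, and Windows keeps a self-referencing entry in that page.
 A rough sketch of a check for that element (my own illustration, not
 production code; it assumes a raw, padded image in which file offset
 equals physical address) might look like this:

    # Rough sketch: scan a raw, padded physical memory image for candidate
    # x64 page-table roots (PML4 pages) by looking for the self-referencing
    # entry Windows keeps in the PML4.
    import struct, sys

    PAGE = 4096
    PHYS_MASK = 0x000FFFFFFFFFF000   # bits 12-51 hold the physical frame
    PRESENT = 0x1

    def find_self_referencing_pml4s(path):
        hits = []
        with open(path, "rb") as f:
            offset = 0
            while True:
                page = f.read(PAGE)
                if len(page) < PAGE:
                    break
                for index, entry in enumerate(struct.unpack("<512Q", page)):
                    # A PML4 self-reference points back at its own frame.
                    if entry & PRESENT and (entry & PHYS_MASK) == offset:
                        hits.append((offset, index))
                        break
                offset += PAGE
        return hits

    if __name__ == "__main__":
        for phys, idx in find_self_referencing_pml4s(sys.argv[1]):
            print("candidate PML4 at 0x%x (self-reference slot %d)"
                  % (phys, idx))

 An image of a live Windows x64 system that yields no candidates at all
 is immediately suspect.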
 The second attribute derives from the fact that Microsoft Windows (and
 probably most other general-purpose operating systems) provides an API
 which permits the programmer/user to define a discrete-time stochastic
 process within the context of the larger continuous-time stochastic
 process.  By this I mean that you can load and lock a block of data with
 a known hash value into memory at a fixed location.  You can then acquire
 the memory "dump" and attempt to recover the data block from the memory
 dump.  If the number of samples is sufficiently large, and the samples
 are representative with respect to location (below 4 GiB, above 4 GiB,
 "unmanaged" memory), architecture (Intel, AMD, NUMA, non-NUMA) and amount
 of memory (< 4 GiB, > 4 GiB, > 16 GiB***), then it should be possible to
 make an inference as to the reliability of acquisition of memory as a
 whole based upon the reliability of acquisition of the samples.
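 The recovery step is straightforward to script.  A rough sketch (file
 names are hypothetical; it assumes the planted markers are page-aligned
 4 KiB blocks of unique random content, that their per-page SHA-256
 digests were recorded when they were locked, and that the image is raw
 and padded):

    # Rough sketch: check how many of the planted 4 KiB marker pages can
    # be recovered from a raw physical memory image, and turn the hit rate
    # into a crude reliability estimate.  marker_hashes.txt (hypothetical
    # name) holds one hex SHA-256 per planted page, recorded at plant time.
    import hashlib, math, sys

    PAGE = 4096

    def recovered_fraction(image_path, hashes_path):
        with open(hashes_path) as f:
            wanted = set(line.strip() for line in f if line.strip())
        found = set()
        with open(image_path, "rb") as f:
            while True:
                page = f.read(PAGE)
                if len(page) < PAGE:
                    break
                digest = hashlib.sha256(page).hexdigest()
                if digest in wanted:
                    found.add(digest)
        return len(found), len(wanted)

    if __name__ == "__main__":
        k, n = recovered_fraction(sys.argv[1], sys.argv[2])
        p = k / n
        # Normal-approximation 95% lower bound on the recovery probability;
        # only meaningful if the samples really are representative.
        lower = max(0.0, p - 1.96 * math.sqrt(p * (1 - p) / n))
        print("recovered %d/%d markers (%.1f%%), 95%% lower bound ~%.1f%%"
              % (k, n, 100 * p, 100 * lower))
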
 In any event, being able to exclude particularly unreliable memory
 acquisition tools would be a step forward, even if it falls short of
 validating the tools that remain for LE evidentiary purposes.
 I thought that Volatility might be able to play a useful role in
 developing a public test framework if it were reworked to identify
 present or missing structural elements (e.g. missing page tables) in a
 documentable way.  I asked Aaron about it last year but never got any
 response.
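 As a rough illustration of the kind of report I have in mind (this is
 not existing Volatility code; it assumes the DTB is already known and
 that the image is raw and padded), one could walk the PML4 and document
 which referenced page-table pages are absent or zeroed in the image:

    # Rough illustration (not actual Volatility code): given the physical
    # address of the PML4 (the DTB) and a raw padded image, report PDPT
    # pages that the PML4 references but that are absent or zero-filled in
    # the image.  The same walk extends to the PD and PT levels.
    import struct

    PAGE = 4096
    PHYS_MASK = 0x000FFFFFFFFFF000
    PRESENT = 0x1

    def report_missing_pdpts(image_path, dtb):
        missing = []
        with open(image_path, "rb") as f:
            f.seek(0, 2)
            image_size = f.tell()
            f.seek(dtb)
            pml4 = struct.unpack("<512Q", f.read(PAGE))
            for index, entry in enumerate(pml4):
                if not entry & PRESENT:
                    continue
                target = entry & PHYS_MASK
                if target + PAGE > image_size:
                    missing.append((index, target, "outside image"))
                    continue
                f.seek(target)
                if f.read(PAGE) == b"\x00" * PAGE:
                    missing.append((index, target, "zero-filled"))
        return missing
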
 Regards,
 gmg.
 _______________________________________________
 Vol-users mailing list
 Vol-users(a)volatilityfoundation.org
 http://lists.volatilityfoundation.org/mailman/listinfo/vol-users