Interesting message, especially the part explaining the differences between
imaging and sampling.
I also contacted the authors to express my concerns about the document's lack of
accuracy. I explained at the same time that the highlighted "flaws" aren't really
new or unique, and that it would have been better for the authors to also make
clear to the non-developers who will read this paper thinking it provides
solutions that, as long as you are using software-based acquisition, *regardless*
of the acquisition method you use, if you are running at the same level as or
above any malicious program you are *always* at risk, and there is *no* solution
to that.
If an attacker has (physical) access to your box, it's game over.
Best,
On Mon, Aug 12, 2013 at 7:46 AM, George M. Garner Jr. <
ggarner_online(a)gmgsystemsinc.com> wrote:
This post is not specifically about Volatility;
however, I know that the
topic will be of interest to many list members. Reliable memory analysis
depends on reliable memory acquisition. No matter how good an analysis
tool is, it can only analyze what is in the memory "dump." If the memory
acquisition tool produces an incomplete or erroneous memory dump the
analysis will also fail or be misleading. It is thus with pleasure that I
note the progress that is being made toward developing a critical framework
for evaluating and testing memory acquisition tools. The latest
contribution is by Stefan Vömel and Johannes Stüttgen whose paper, "An
evaluation platform for forensic memory acquisition software" was presented
at the recent DFRWS conference and may be downloaded from the following
link:
http://www.dfrws.org/2013/proceedings/DFRWS2013-11.pdf
The authors adopt an approach which they call "white-box testing," whereby they
modify the source code of various open source tools (win32dd,
mdd, winpmem) to insert hypercalls at various locations in the acquisition
process. The hypercalls inform the test platform of various important
system events and operations and inspect the state of the subject system at
the moment of the hypercall. The state as recorded by the hypercalls is
then used as a metric to evaluate the reliability (i.e. "correctness") of
the tool which is under test.
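For readers who want a concrete picture of what that instrumentation might look
like, here is a minimal sketch (my own illustration, not taken from the paper) of
a hypercall wrapped around the per-page copy loop of an acquisition driver. The
HYPERCALL_* names, the magic I/O port, and the acquire_range/copy_page functions
are all assumptions; the actual hypercall interface is defined by Vömel and
Stüttgen's modified emulator.

/*
 * Hypothetical illustration only: how a white-box test hypercall might be
 * inserted into an acquisition driver's per-page copy loop.  The hypercall
 * mechanism (a magic I/O port trapped by the emulator) and all names here
 * are assumptions, not the interface actually used in the paper.
 */
#include <stdint.h>

#define HYPERCALL_PORT       0x5658u  /* assumed magic port watched by the emulator */
#define HYPERCALL_PAGE_READ  0x0001u  /* assumed event id: "about to copy this page" */

/* Notify the test platform of an event; the emulator snapshots state here. */
static inline void hypercall(uint32_t event, uint32_t page_frame)
{
#if defined(__i386__) || defined(__x86_64__)
    __asm__ __volatile__("outl %0, %w1"
                         :
                         : "a"(event), "Nd"(HYPERCALL_PORT), "b"(page_frame)
                         : "memory");
#else
    (void)event; (void)page_frame;    /* no-op off x86 */
#endif
}

/* Acquisition loop with instrumentation (sketch only). */
int acquire_range(uint64_t start, uint64_t end,
                  int (*copy_page)(uint64_t phys_addr))
{
    for (uint64_t pa = start; pa < end; pa += 4096) {
        hypercall(HYPERCALL_PAGE_READ, (uint32_t)(pa >> 12)); /* record ground truth */
        if (copy_page(pa) != 0)                               /* tool's normal copy  */
            return -1;
    }
    return 0;
}

The point of placing the hypercall immediately before the copy is that the test
platform can record the page's contents at that instant and later compare them
against what the tool actually wrote to the dump.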
The approach has a number of significant limitations which the authors
acknowledge, the most significant of which is that they require the source
code which must be modified in order to perform the test. Most commercial
tool vendors do not provide the source code to their memory acquisition
tools. Even one of the open source tools tested by Vömel and Stüttgen,
win32dd, is an old version of the current closed-source Moonsols tool, which
contains many bug fixes that are not in the open source precursor. As far
as I am aware the MDD tool is no longer supported or under active
development. Michael Cohen's winpmem is the only currently supported tool
that the authors were able to test.
Another significant limitation is that the test platform is tied to a
highly customized version of the Bochs x86 PC emulator. The test platform
is restricted to 32-bit versions of Windows with not more than 2 GiB of
memory. The acquisition of physical memory from systems equipped with more
than 4 GiB of memory, and from 64-bit versions of Microsoft Windows, is an
area where memory acquisition tool vendors have stumbled in the past.
Possibly all contemporary memory acquisition tools handle 64-bit systems
and systems with more than 4 GiB of memory correctly; however, we would
like to be able to test this and not rely solely on faith.
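As one small illustration of the kind of check a test platform would need on
larger systems, the sketch below (my own, not from the paper) compares a dump's
recorded physical-address runs against the platform's physical memory map and
reports how much of the reported RAM, including the region above the 4 GiB line,
the tool failed to capture. The run structures and the example layout are
assumptions for illustration.

/*
 * Hypothetical sketch: verify that an acquired memory image covers the
 * physical address ranges reported by the platform, including those above
 * 4 GiB.  How the "runs" are obtained (firmware memory map, dump header,
 * etc.) is assumed, not specified by the paper.
 */
#include <stdint.h>
#include <stdio.h>

struct run { uint64_t base; uint64_t length; };

/* Bytes of 'want' not covered by any run in 'have' ('have' runs assumed disjoint). */
static uint64_t uncovered_bytes(const struct run *want, size_t nwant,
                                const struct run *have, size_t nhave)
{
    uint64_t missing = 0;
    for (size_t i = 0; i < nwant; i++) {
        uint64_t covered = 0;
        for (size_t j = 0; j < nhave; j++) {
            uint64_t lo   = want[i].base > have[j].base ? want[i].base : have[j].base;
            uint64_t hi_w = want[i].base + want[i].length;
            uint64_t hi_h = have[j].base + have[j].length;
            uint64_t hi   = hi_w < hi_h ? hi_w : hi_h;
            if (hi > lo)
                covered += hi - lo;
        }
        missing += want[i].length - covered;
    }
    return missing;
}

int main(void)
{
    /* Example platform: RAM below the PCI hole plus RAM remapped above 4 GiB. */
    const struct run platform[] = {
        { 0x00000000ULL,  0xC0000000ULL },   /* 3 GiB below 4 GiB  */
        { 0x100000000ULL, 0x140000000ULL },  /* 5 GiB above 4 GiB  */
    };
    /* Example dump that only captured memory below 4 GiB. */
    const struct run dump[] = {
        { 0x00000000ULL, 0xC0000000ULL },
    };

    uint64_t missing = uncovered_bytes(platform, 2, dump, 1);
    printf("bytes reported by platform but absent from dump: %llu\n",
           (unsigned long long)missing);
    return 0;
}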
One limitation which the authors do not discuss is the impact of
restricting the test platform to a particular VM (i.e. Bochs). In our
experience VMs provide a much more predictable and stable
environment than real hardware and may not be a good indication of how a
memory acquisition tool will perform on real hardware. In addition, as was
mentioned previously on this list, different VM manufacturers have chosen
to emulate very different PC designs. How a tool performs on VMWare may
not be a good indicator of how the same tool will perform on Microsoft
Hyper-V or VirtualBox or KVM.
Also, the authors do not acknowledge the possibility that memory
acquisition tools may perform differently on different versions of the
Microsoft operating system. Each new version of the Microsoft operating
system has brought changes to the Windows memory manager, in some cases
significant changes. Currently, we are (out of necessity) using
acquisition methods that differ in varying degrees on Microsoft Windows XP,
Windows 2003 SP1 and later, Windows Vista, Windows 7 and Windows 8. For
Windows 8.1 Preview we have found it necessary to invent new methods that
are different from all of the methods which we previously employed.
Finally, the authors do not articulate the theoretical framework for
forensic volatile memory acquisition which serves as the basis for their
notion of "correctness." Historically, computer forensic experts have
evaluated the acquisition of volatile memory as an "imaging" process. Most
computer forensic experts were familiar with the imaging of "dead" hard
drives. It was natural to assume memory acquisition was doing much the
same thing. The problem is that a running computer system is by nature a
stochastic process (more precisely, it is a "continuous time stochastic
process") which cannot be "imaged." It can only be sampled. Sampling is
a
common technique in forensic science, other than in computer forensics.
Computer forensics is somewhat unique among forensic "sciences" in its
almost exclusive reliance upon "imaging" as a standard of reliability. It
is difficult not to note the severe tension which exists between this
theoretic framework and the reality of modern computer design. Not only is
"live" imaging of hard drives now commonplace for practical reasons (which
equally aren't true images) but also, as we now know, a hard drive is an
embedded computer system which may alter the physical layout of data on the
disks any time power is applied to the drive.
The theoretical approach which we have advocated (see, e.g., Garner, Maut
Design Document (unpublished manuscript, 2011)) is to view volatile memory
acquisition as a process of sampling the state of a continuous-time
stochastic process. We further propose to use the structural elements
created by the operating system and the hardware as a metric to evaluate
the reliability of the sampling process. We believe that these structural
elements may be used for this purpose because they possess the Markov
property: their future state depends entirely on their present state and is
conditionally independent of the past.  A sample is
said to be reliable when it is "representative" of the entire population.
In other words, a sample is reliable with respect to a specific issue of
material fact if an inference drawn upon the basis of the sample will
arrive at the same conclusion as if the inference were drawn upon the basis
of the entire population.
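To make the proposed metric concrete, here is a minimal sketch (my own
illustration, not from the Maut design document) of one such structural check:
walking a doubly-linked kernel list as reconstructed from the sampled image and
counting entries whose forward and backward links no longer agree. The
list-entry layout and the read_entry_fn helper are simplified assumptions; real
use would require address translation against the sampled page tables.

/*
 * Hypothetical sketch: use an operating-system structural element (a
 * doubly-linked list such as the process list) as a consistency metric for
 * a sampled memory image.  If entries copied at different moments no longer
 * link to one another (Flink/Blink disagree), the sample was not coherent
 * with respect to that structure.
 */
#include <stdint.h>
#include <stddef.h>

struct list_entry {
    uint64_t flink;   /* virtual address of next entry     */
    uint64_t blink;   /* virtual address of previous entry */
};

/* Assumed helper: read a list_entry out of the acquired image at a given
 * virtual address; returns 0 on success. */
typedef int (*read_entry_fn)(uint64_t vaddr, struct list_entry *out);

/* Walk the list from 'head_vaddr' for at most 'max_entries' entries and
 * count the links that are missing or inconsistent in the sample. */
int count_link_inconsistencies(uint64_t head_vaddr, size_t max_entries,
                               read_entry_fn read_entry)
{
    int broken = 0;
    uint64_t cur = head_vaddr;
    for (size_t i = 0; i < max_entries; i++) {
        struct list_entry e, next;
        if (read_entry(cur, &e) != 0 || read_entry(e.flink, &next) != 0) {
            broken++;            /* entry or its successor absent from the sample */
            break;
        }
        if (next.blink != cur)   /* successor does not point back: incoherence */
            broken++;
        cur = e.flink;
        if (cur == head_vaddr)   /* completed the full circuit */
            break;
    }
    return broken;
}

A count of zero does not prove the sample is representative, but a non-zero
count is direct evidence that the structure was mutated while it was being
acquired.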
The authors appear to assume the traditional "imaging" framework for
volatile computer memory acquisition; however, this assumption should be
stated explicitly, since the entire rest of the paper depends on it.
In conclusion, this paper makes an important contribution to a topic that
is important for the future of computer forensics. However, the authors
need to better articulate their assumptions. Development of a professional
memory forensic tool testing platform will require the development of a
test environment which overcomes the current limitations.
Regards,
George M. Garner Jr.
President
GMG Systems, Inc.
_______________________________________________
Vol-users mailing list
Vol-users(a)volatilityfoundation.org
http://lists.volatilesystems.com/mailman/listinfo/vol-users