Hi George,
On 20 August 2013 18:54, George M. Garner Jr. <ggarner_online(a)gmgsystemsinc.com> wrote:
> I see that you were able to find the relevant passage in the Intel
> Developers Manual (Vol. 3A, section 11.12.4). The passages which you
> quote do indeed state that the simultaneous mapping of a physical memory
> page with both cached and write-combined (WC) memory caching attributes
> is of "particular concern." The passage also provides one scenario of
> how data corruption may result from WC writes to physical pages that are
> also cached. What the passage does not say is that this is the ONLY
> circumstance under which memory corruption may occur. Indeed, it clearly
> states that mapping physical pages with incompatible cache attributes is
> a problem in general which "may lead to undefined operations that can
> result in a system failure." This interpretation is supported also by
> the recommended procedure which an operating system is to follow when
> remapping a physical page with incompatible cache attributes. The OS is
> supposed to render the original mapping not present and flush the TLB of
> any processor which "may have USED the mapping, even speculatively."
> (Emphasis added.) Reading from a physical memory page, moreover,
> modifies the internal state of the processor, including the TLB and,
> potentially, the L2 and L3 caches.
Indeed, if you read our paper carefully you may discover that we do
flush the TLB before every page mapping.
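To make that concrete, the flush-before-map step can be sketched in kernel-mode C roughly as follows. This is an illustrative fragment, not winpmem's actual source: the wrapper name MapPhysicalPage is hypothetical, while MmMapIoSpace and the __invlpg intrinsic are standard WDK primitives.

```c
/* Illustrative sketch of "flush the TLB before every page mapping".
 * Kernel mode only; not a buildable driver. */
#include <ntddk.h>

static PVOID MapPhysicalPage(PHYSICAL_ADDRESS phys)  /* hypothetical wrapper */
{
    /* Map one page of physical memory non-cached, so the new mapping's
     * caching attribute cannot conflict with an existing cached mapping. */
    PVOID va = MmMapIoSpace(phys, PAGE_SIZE, MmNonCached);
    if (va != NULL) {
        /* Invalidate any stale TLB entry for this virtual address before
         * the page is read, per Intel SDM Vol. 3A, section 11.12.4. */
        __invlpg(va);
    }
    return va;
}
```

Mapping the page non-cached sidesteps the conflicting-attribute scenario the manual warns about, and the invlpg ensures no stale translation for that virtual address survives from a previous mapping.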
> The DMA controller, as the name implies, directly accesses physical
> memory independent of the CPU. DMA does not use virtual addresses or
> page tables or CPU caches. I fail to see the relevance of firewire
> memory acquisition to this discussion. The DMA controller does
> guarantee that the processor
I was only illustrating that the DMA controller is able to view physical memory without going through any L1 or L2 caches, and hence by definition sees physical memory with a different caching attribute from the CPU, which must go through the caches, the TLB, and the MMU.
> and various system buses have a coherent view of physical memory.
> However, memory coherence, as was previously stated, does not prevent
> memory corruption. If memory corruption does occur, the DMA controller
> will guarantee that the results are propagated to all of the system
> buses.
I'm curious what you mean by "corruption" - you seem to be using this term a lot. Do you mean that unexpected data is written to memory locations, or that RAM contents suddenly flip with no actual write operation? I think this is the point that I am most struggling to understand: how can there be corruption if there is no write operation?
> The passage which you quote from MSDN concerning
> STATUS_CONFLICTING_ADDRESSES is true of some versions of Microsoft
> Windows and not of others. The memory manager has changed significantly
> over succeeding versions of Windows; and the changes are not always
> reflected in the product documentation. Windows 8.1 will require some
> significant changes in how memory is acquired, at least if the preview
> is any indication.
Having no intimate access to the Windows source code, I can only go from MSDN :-). I take your point about changes in the OS APIs over time, some of which are not well documented. This is the advantage, I think, of sticking to the low-level, hardware-oriented approach described in the paper. Our technique is essentially OS agnostic, having been implemented on OS X, Linux, and Windows without problems. The only variation we need to account for is architecture (e.g. 32-bit vs. 64-bit).
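For readers curious what "hardware oriented" means in practice, the core of the paper's technique - repointing a page table entry directly at each physical frame to be read - can be sketched as follows. The PTE bitfield is the standard x86-64 4 KiB-page layout; the helper names and surrounding structure are simplified assumptions for illustration, not winpmem's actual code.

```c
/* Sketch of PTE remapping: reuse one virtual page we own (a "rogue
 * page") and repoint its page table entry at each physical frame in
 * turn. Kernel mode only; not a buildable driver. */
typedef union {
    struct {
        unsigned long long present   : 1;
        unsigned long long rw        : 1;
        unsigned long long user      : 1;
        unsigned long long writethru : 1;
        unsigned long long nocache   : 1;
        unsigned long long accessed  : 1;
        unsigned long long dirty     : 1;
        unsigned long long pat       : 1;
        unsigned long long global    : 1;
        unsigned long long ignored   : 3;
        unsigned long long pfn       : 40;  /* physical frame number */
        unsigned long long reserved  : 11;
        unsigned long long nx        : 1;
    };
    unsigned long long value;
} PTE;

/* rogue_va: a virtual page we control; rogue_pte: a pointer to its PTE,
 * located by walking the page tables (not shown). */
static void remap_rogue_page(volatile PTE *rogue_pte, void *rogue_va,
                             unsigned long long target_pfn)
{
    rogue_pte->pfn = target_pfn;  /* point the PTE at the target frame */
    __invlpg(rogue_va);           /* flush the stale TLB entry first */
    /* rogue_va can now be read to image the chosen physical page. */
}
```

Because this touches only the page tables and the TLB, it depends on the architecture rather than on any OS memory-management API, which is what makes the approach portable across operating systems.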
> People think that winpmem is a part of Volatility and that it therefore
> must be good. In fact, when I go to the link which you published
> (http://goo.gl/9VnnkY) "Volatility" is precisely what I see. There are
> many people on the list who lack the technical background to properly
> evaluate your code or understand its risks. I think we know that many
Indeed, winpmem is an open source memory imager. Please note that the technique we have been discussing in this thread is highly experimental (and indeed that is why it was publishable in the first place :-). Although the technique is implemented in the upcoming release of winpmem, it is not the default technique chosen by the tool. The default technique is the traditional mapping of a section object of the \Device\PhysicalMemory device (which is what most imaging tools out there implement). However, as the paper points out, this technique is highly vulnerable to anti-forensic attacks.
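For comparison, that traditional default amounts to mapping a view of the \Device\PhysicalMemory section object, roughly as in this kernel-mode C sketch. Error handling is elided and the wrapper name is hypothetical; ZwOpenSection and ZwMapViewOfSection are the documented kernel APIs.

```c
/* Sketch of the traditional technique: map a read-only view of the
 * \Device\PhysicalMemory section object. Kernel mode only; not a
 * buildable driver. */
#include <ntddk.h>

static PVOID MapPhysicalRange(LARGE_INTEGER offset, SIZE_T length)
{
    UNICODE_STRING name;
    OBJECT_ATTRIBUTES attrs;
    HANDLE section;
    PVOID base = NULL;
    SIZE_T view = length;

    RtlInitUnicodeString(&name, L"\\Device\\PhysicalMemory");
    InitializeObjectAttributes(&attrs, &name,
                               OBJ_CASE_INSENSITIVE | OBJ_KERNEL_HANDLE,
                               NULL, NULL);
    if (!NT_SUCCESS(ZwOpenSection(&section, SECTION_MAP_READ, &attrs)))
        return NULL;

    /* Map a read-only view of physical memory at the requested offset. */
    ZwMapViewOfSection(section, ZwCurrentProcess(), &base, 0, 0,
                       &offset, &view, ViewUnmap, 0, PAGE_READONLY);
    ZwClose(section);
    return base;
}
```

Because the section object lives in the OS object namespace, a rootkit that hooks or substitutes it can feed every tool that relies on it a falsified view, which is exactly the anti-forensic weakness the paper describes.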
> people are going to take this code and run it on their production
> systems.
Indeed they have and do, with much reported success. Winpmem can be, and regularly is, used to enable Volatility to operate on the live system image, which is great for quickly triaging a system, or for analyzing a system remotely when copying the image is prohibitive. The fact that winpmem is open source helps it to be integrated into larger, enterprise-grade incident response tools such as GRR (http://code.google.com/p/grr/).
> I have to say that this code involves substantial risk, and that you
> must
Running any code involves some element of risk. As our paper points out, if the rootkit makes some small modifications, it is trivial to blue screen the machine when the imaging tool attempts to image it.
> accept some responsibility for the potential consequences given that we
> know the public perceptions. The Order of Volatility also must accept
> some responsibility since it allows this code to be published under its
> banner without adequate warning of its experimental nature.
This is code contributed to the Volatility project since it is useful to extend Volatility to new use cases, and it makes Volatility a complete solution (imaging + analysis). The tool is out there for anyone to use, test, and contribute patches to, unlike some commercial tools which prefer not to be openly and impartially tested for fear of being leaked to the bad guys ;-).
Thanks,
Michael.