hi,
I encountered a problem when I tried to parse the inode object in
Linux 2.4.20 (I work on Linux 2.4.20 and Linux 2.6.23). My
ObjectReader class, which is used to read objects from a memory dump
file and parse them, didn't work well. I debugged step by step and
found that the size of the inode in memory is bigger than the size
calculated from its definition.

Considering that no kernel patch has been applied, I suppose it is
byte alignment that causes this problem. However, gcc has a complex
mechanism for dealing with byte alignment -_-!!

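For example, here is a small Python sketch (using ctypes, which
follows the platform C compiler's alignment rules) showing how
padding makes a struct bigger than the sum of its members; the struct
is a made-up illustration, not a real kernel type:

    import ctypes

    # A char followed by a pointer: the pointer must sit on its natural
    # boundary, so the compiler inserts padding after 'flag'.
    class Example(ctypes.Structure):
        _fields_ = [
            ("flag", ctypes.c_char),    # 1 byte
            ("ptr",  ctypes.c_void_p),  # 4 or 8 bytes, pointer-aligned
        ]

    # The sum of member sizes is 5 (or 9 on 64-bit), but sizeof()
    # reports 8 (or 16) because of the padding.
    print(ctypes.sizeof(Example))
    print(Example.ptr.offset)  # 4 (or 8), not 1
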
This may be called the third problem examiners face in memory
forensics, I think. Examiners should note members which are defined
as char or which are followed by another kernel object. Then more
tests should be performed to confirm preliminary results.

As I am not familiar with how gcc deals with byte alignment, I prefer
to test all the important kernel structs and record in a database
whether and where they have an alignment problem. Examiners can then
modify the profile according to the database, or have ObjectReader
look up the database before it reads and parses objects.

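Roughly, I imagine the lookup working like this (a minimal Python
sketch; the names, offsets and kernel versions are only placeholders):

    # Hypothetical fixup table: entries record where the compiled layout
    # diverges from the offset computed from the source definition.
    ALIGN_FIXUPS = {
        # (kernel version, struct, member) -> actual offset in memory
        ("2.4.20", "inode", "i_sb"): 140,  # placeholder value
    }

    def resolve_offset(kernel, struct, member, computed_offset):
        # Prefer the recorded offset when a discrepancy is known.
        return ALIGN_FIXUPS.get((kernel, struct, member), computed_offset)
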
Does anyone have other ideas? I would be glad to hear new ones.
Yuhang Gao
2010/1/4 Michael Cohen <scudette@gmail.com>:
> Hi Yuhang,
>
>>> - No use of pool tags so scanning needs to be much more thorough.
>> I developed about three methods of searching for kernel objects, and
>> I am writing some toy applications to test them. As far as I can
>> see, the results are satisfying.
> Usually scanning for objects without pool tags means running many
> tests for sanity of various fields. For example, if a field can only
> take on one of several values, you can eliminate positions which
> don't make sense. This typically takes a long time as you go from one
> offset, check it, eliminate it, move over one byte, and so on.
>
> Our approach in volatility is to order the tests in such a way that
> we do the tests which are hard to pass at random first. For example,
> say you have a certain struct with:
>
>     struct list_head list;
>     int flags;
>
> Suppose flags is only ever 0 or 1, so we have two tests to see if the
> struct makes sense:
>
> 1) Follow the flink on the list head and see if the blink of it
>    points back to our struct.
> 2) Check if flags is 0 or 1.
>
> For a number of reasons test 1 is more expensive - it usually
> requires paging in other memory and dereferencing pointers.
>
> We could do test 2 first and check if each byte is 0 or 1, and if
> not, move to the next byte and check it. If it is, then we do the
> other test as well, and if everything looks good we can guess this is
> the struct. A much better alternative, though, is to run the tests
> only over bytes that are 0 or 1 - so we automatically skip all
> positions which cannot match without even having to check them. This
> is much quicker than advancing by one byte each time and redoing the
> test.

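A minimal Python sketch of that skipping idea (my own illustration,
not Volatility code; FLAGS_OFFSET and expensive_check are hypothetical
stand-ins):

    FLAGS_OFFSET = 8  # hypothetical offset of 'flags' inside the struct

    def expensive_check(data, struct_offset):
        # Stand-in for test 1: follow flink and verify that its blink
        # points back at our struct. Stubbed out here.
        return False

    def scan(data):
        offset = FLAGS_OFFSET
        while offset < len(data):
            if data[offset] not in (0, 1):
                # Cheap test failed: jump straight to the next byte that
                # could possibly pass instead of re-testing every byte.
                hits = [p for p in (data.find(b"\x00", offset + 1),
                                    data.find(b"\x01", offset + 1))
                        if p != -1]
                if not hits:
                    return
                offset = min(hits)
                continue
            # Cheap test passed: only now pay for the expensive test.
            if expensive_check(data, offset - FLAGS_OFFSET):
                yield offset - FLAGS_OFFSET  # candidate struct start
            offset += 1
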
> This is kind of the logic behind the new scanning system. Look at
> http://code.google.com/p/volatility/source/browse/branches/Volatility-1.4_b…
> for an example. Basically you define ScannerCheck classes which have
> different methods. The skip() method allows a check to skip a number
> of bytes which are definitely not going to match. The check() method
> returns True if it's possible there is a match here, False otherwise.
> Then a scanner is just a class which collects all these checks
> together, and this can be applied to the image. It will yield
> possible matches.

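Paraphrased in Python, such a check might look roughly like this (a
sketch following the description above, not code from the branch; the
actual Volatility 1.4 base classes and signatures may differ):

    class FlagsCheck:
        # Cheap structural test: the hypothetical 'flags' byte, at a
        # fixed offset inside the struct, must be 0 or 1.
        def __init__(self, data, flags_offset=8):
            self.data = data
            self.flags_offset = flags_offset

        def check(self, offset):
            # True if a match at 'offset' is still possible.
            pos = offset + self.flags_offset
            return pos < len(self.data) and self.data[pos] in (0, 1)

        def skip(self, data, offset):
            # How far the scanner may safely jump: the distance to the
            # next position where check() could pass.
            start = offset + self.flags_offset + 1
            hits = [p for p in (data.find(b"\x00", start),
                                data.find(b"\x01", start)) if p != -1]
            if not hits:
                return len(data) - offset  # nothing ahead can match
            return min(hits) - self.flags_offset - offset

    class StructScanner:
        # The scanner collects checks and yields offsets passing all.
        def __init__(self, checks):
            self.checks = checks

        def scan(self, data):
            offset = 0
            while offset < len(data):
                if all(c.check(offset) for c in self.checks):
                    yield offset
                    offset += 1
                else:
                    offset += max(1, max(c.skip(data, offset)
                                         for c in self.checks))
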
>>> - The structs are very variable
>> Yes, this problem annoyed me a lot until I took a few days to write
>> a Java program to deal with it. I put all the data structures I need
>> into a file (C style - you may call it a profile) and my Java
>> program parses these data structures. The results tell me the offset
>> of each member in the kernel objects and their containing object.
> Our framework calculates the offsets at run time, so it's possible to
> tweak the profile at run time, reapply it, and hopefully converge on
> something sensible.

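For instance, a profile entry and a run-time tweak might look like
this (a sketch in the spirit of Volatility's vtype format; the sizes
and offsets below are invented, not a real kernel layout):

    # struct name -> [size, {member: [offset, type description]}]
    linux_types = {
        "task_struct": [1648, {                    # invented size
            "pid":   [168, ["int"]],               # invented offsets
            "comm":  [621, ["array", 16, ["char"]]],
            "tasks": [288, ["list_head"]],
        }],
    }

    def offset_of(profile, struct, member):
        return profile[struct][1][member][0]

    # Tweak at run time, e.g. after discovering an alignment difference,
    # then re-resolve offsets from the adjusted profile.
    linux_types["task_struct"][1]["pid"][0] = 172
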
>> Your advice to just come up with a simple task (like scanning for
>> task_structs) and then write a plugin to deal with it is very good.
>> And my suggestion is to apply different strategies when searching
>> for kernel objects. The more, the better.
> Definitely - the new volatility framework allows you to apply any
> number of checks in the scanner, and there is good support for
> command line options or configuration to let the user decide which
> checks should be used (e.g. fast, pedantic, loose matching, etc.).
>> Besides, what do you think is most important in Linux memory
>> forensics? The files, processes, network connections and so on. What
>> else, then?
> At the very least, being able to do the same as on Windows - dump
> processes, find packets, tasks, sockets, etc.
>> You mentioned your new scanning framework. Has it been finished yet?
>
> Yes, it is used in a number of plugins.
>
>> You said I would learn how the new framework works. In a few days,
>> or longer?
> It's not difficult to learn - just look at some of the other plugins.
> As always, documentation is a bit lacking at this point - but we try
> to make the code as readable as possible.
>> And is there anything I can help with?
> We really would like to push forward on the Linux problem. It would
> be nice to have even a small set of plugins which work well on Linux
> images - I think it would help push the portable side of volatility
> so we can support more targets.
>> Then, for Windows, it's very interesting to learn that Microsoft
>> Corporation has a Windows Research Kernel. Debugging and
>> disassembling are two ways to understand their kernel objects. Even
>> if we locate the kernel objects, we probably won't understand how to
>> make use of them. Any ideas?
> Many kernel structures are well documented, and lots have symbols
> too. Clearly a deeper understanding of undocumented structures would
> help, but there are many things we can do with documented stuff too.
>> I am going to Beijing after the Spring Festival, and I will be
>> engaged in preparing for the GRE for a long time. Probably I won't
>> have time to spend on memory forensics. Therefore, I am determined
>> to do as much to help as I can now.
> Have fun. It's always good to have contributions :-)
>
> Michael.