I'm having trouble getting the linux support branch working.
I followed all the directions found here:
https://code.google.com/p/volatility/wiki/LinuxMemoryForensics
I just can't seem to get it to work. Here's what I'm getting:
joe@zuul:~/volatility/lin64-support$ python vol.py
Welcome to volshell!
To get help, type 'help()'
>>> session.filename = "/home/joe/dump"
Traceback (most recent call last):
File "<console>", line 1, in <module>
NameError: name 'session' is not defined
Can anyone help me with this, or point me to a download for the older linux
branch, which was previously working for me? Thank you very much.
--
Joe Sylve, M.S.
Senior Security Researcher
GIAC Certified Forensics Analyst (GCFA)
Digital Forensics Solutions, LLC
http://www.digitalforensicssolutions.com/
Hi,
I'm playing with the new packetscan.py module from issue 233:
http://code.google.com/p/volatility/issues/detail?id=233
I'm using it against a Windows XP SP2 image and get the following error:
$ volatility packetscan -f IR-XP-PC-20120302-memdump.mem --no-cache
Traceback (most recent call last):
File "/usr/local/bin/volatility", line 135, in <module>
main()
File "/usr/local/bin/volatility", line 126, in main
command.execute()
File "/usr/local/lib/python2.7/dist-packages/volatility/commands.py",
line 101, in execute
func(outfd, data)
File "/usr/local/lib/python2.7/dist-packages/volatility/plugins/packetscan.py",
line 100, in render_text
for source, dest in data:
File "/usr/local/lib/python2.7/dist-packages/volatility/plugins/packetscan.py",
line 91, in calculate
profile.vtypes.update(ippkt_vtype)
AttributeError: 'WinXPSP2x86' object has no attribute 'vtypes'
Does this module not support XP SP2?
I'm admittedly in over my head, but I'm trying to figure out how to
create a plugin to scan memory for a particular signature, and when
found, parse data following the signature at specified offsets. I was
examining this module as a possible template for my own module. I'm
wondering, is it recommended to create a new vtype when scanning for a
structure in memory?
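For context, a vtype is essentially a declarative version of what one would otherwise do by hand with the struct module: find a signature, then decode fields at fixed offsets past it. A minimal standalone sketch of the manual approach (the signature, field names, and offsets here are invented for illustration):

```python
import struct

# Hypothetical 4-byte signature followed by two fixed-offset fields;
# the names and layout are made up for this example.
SIGNATURE = b"PKT0"

def scan_for_records(buf):
    """Yield (offset, version, length) for every signature hit in buf."""
    pos = buf.find(SIGNATURE)
    while pos != -1:
        # Decode fields at fixed offsets past the signature,
        # exactly what a vtype would declare for the framework.
        version, length = struct.unpack_from("<HI", buf, pos + 4)
        yield pos, version, length
        pos = buf.find(SIGNATURE, pos + 1)

sample = b"\x00" * 8 + SIGNATURE + struct.pack("<HI", 2, 1500) + b"\x00" * 4
print(list(scan_for_records(sample)))  # -> [(8, 2, 1500)]
```

A vtype moves these offsets and types into the profile so the framework can instantiate the structure as an object; for a one-off scanner either approach works.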
Thanks, Mike
Hi All,
As you may be aware, I have been working on the upcoming version
of volatility. It has been a massive cleanup of the code base and a
rewrite of much of the underlying architecture. The main thrust of
this release is the development of a framework which can easily be
integrated into other tools as a library. This means removing all
globals, removing command-line flags, and removing any automated
magic that happens behind the scenes. It also involves redesigning
the object model so it is more consistent with a library, and
cleaning up the style of the codebase to improve its quality.
The following is the summary of the main architectural changes:
- Domain specific profiles. The profiles are no longer associated with
the address spaces. You can have any number of profiles alive at the
same time (e.g. kernel space and user space). Profiles are simply
templates for instantiating CTypes on an address space. Therefore a
profile is not tied to a particular address space in any way. This
also removes the obj_vm/native_vm confusion for
dereferencing across address spaces, as address space switching is now
done explicitly.
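As a toy model of this design (the class and method names below are invented for illustration, not the branch's actual API), a profile acts as a template that instantiates objects on whichever address space it is handed, and several profiles can coexist:

```python
# Toy model: the profile, not the address space, knows struct layouts,
# so any profile can be applied to any address space.

class AddressSpace(object):
    def __init__(self, name, data):
        self.name = name
        self.data = data

    def read(self, offset, length):
        return self.data[offset:offset + length]

class Profile(object):
    def __init__(self, name):
        self.name = name

    def Object(self, type_name, offset, address_space):
        # A real profile would build a typed object here; a dict
        # stands in for one in this sketch.
        return {"type": type_name, "offset": offset,
                "space": address_space.name, "profile": self.name}

kernel_as = AddressSpace("kernel", b"\x00" * 64)
user_as = AddressSpace("user", b"\x00" * 64)

# Two profiles alive at the same time, e.g. kernel space and user space.
kernel_profile = Profile("Linux64")
user_profile = Profile("Glibc64")

task = kernel_profile.Object("task_struct", 0x10, kernel_as)
heap = user_profile.Object("malloc_chunk", 0x20, user_as)
print(task["profile"], heap["profile"])
```

The point of the sketch is only that instantiation goes through the profile, so switching address spaces is an explicit argument rather than hidden state.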
- Removal of VolatilityMagic - there are basically two use cases for
these - the first is just a profile specific constant, and the second
is the automated running of code to retrieve a value (e.g. kdbg scan).
Constants are now parts of the profile itself (for example in linux
the system map is stored as profile constants), and automated code
running is removed in favor of explicit plugin execution (which is
made easier now).
- The plugin architecture is updated - Previously it was very hard to
share code between plugins - this is now very simple. For example, any
plugin can call any other plugin. Plugins also export more intelligent
functions than calculate() - so they can be used as modules
themselves. Plugins are now also selectable according to the profile -
for example we have a linux "pslist" and a windows "pslist", so just
calling plugins.pslist() will get the right module depending on the
selected profile.
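The profile-based plugin selection might be sketched like this (the registry shape and decorator are invented for illustration; only the idea of two implementations sharing one name is from the text above):

```python
# Two pslist implementations register under the same name; the active
# profile's OS decides which one a lookup returns.

PLUGINS = {}

def register(name, os_type):
    def decorator(cls):
        PLUGINS[(name, os_type)] = cls
        return cls
    return decorator

@register("pslist", "windows")
class WinPSList(object):
    def run(self):
        return "walking _EPROCESS list"

@register("pslist", "linux")
class LinuxPSList(object):
    def run(self):
        return "walking task_struct list"

def get_plugin(name, profile_os):
    return PLUGINS[(name, profile_os)]()

print(get_plugin("pslist", "linux").run())  # -> walking task_struct list
```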
- The registry code is updated to remove automatic importation of
files. This allows for much simpler and more efficient packaging as
core modules are referenced directly. Loading of external plugins can
now be done via a Load() plugin. This architecture is better for use
in a framework.
- The main user interface is based on ipython. This makes the user
interface much easier to use - with command-line completion helping
users throughout. Users can also run arbitrary code on the results, and
we can now have lightweight extension scripts - without needing to
write a proper plugin.
http://code.google.com/p/volatility/wiki/ScudettesBranch has some
information about the new interface, but I will post a screencast soon
that shows more.
- Style cleanups. This refactor cleans up the style. In particular
issues such as lines over 80 chars and legacy calls to base class
methods (i.e. not using super) are cleaned up.
- The new release includes memory acquisition tools as well as linux support.
What should we do about it?
The new version is a major rewrite of the existing code base - it is
not a patch, and retrofitting it would be too difficult. We need to
port the existing modules to the new code base and clean them up. All
developers need to consider the
changes present in this version and how they apply to the next
volatility release. If people have general architectural
suggestions/disagreements etc we need to resolve these now. We then
need to create a new volatility 3.0 branch and complete the porting of
the modules to it.
I have made a spreadsheet to document what modules are converted to
the new framework and which are not done yet:
https://docs.google.com/spreadsheet/ccc?key=0Athc84IflFGbdHpGcmNjWFkycFN4Q0…
I am making slow progress porting the code over to the new code
base, and I am happy with how the new code is looking. If people want
to help, please let me know and I will provide editing rights so you
can put your name next to each item to take ownership.
I suggest that all major new development (such as new modules) go to
this branch.
I am looking to complete a release candidate in the next month or two,
as there is really not that much left to do. I will be running a
workshop at DFRWS about volatility and definitely want to have the new
release out by then.
Thanks,
Michael.
Hi,
I want to do memory analysis for a single process running in
memory. Is this possible with Volatility?
I am calling an executable from a Perl script. When the executable is
loaded into memory, I want to see how much memory it consumes.
Furthermore, I want to plot a graph of the memory consumed by this
process over its entire life cycle.
This is very important since I need to compare three different tools
by their memory requirements.
Thanks in advance.
Regards,
Akash.
Hi guys!
Do you have any plans concerning pagefile.sys integration into Volatility?
Joint analysis of a memory dump and the pagefile would be valuable,
and could help reconstruct the image of all running processes and of
the system as a whole, don't you think?
Btw, thank you all for the great framework!
--
Thanks,
Anton
Hi Guys,
I just wanted to send a quick email to let you know where the
scudette branch is heading in the near future. The main goal of
development is to create a volatility library that can be easily used
from external programs, and also to make it far easier for people to
use volatility and in particular automate it through simple scripts
(without needing to necessarily write a plugin).
The idea is to extend volshell to make it THE way to use volatility.
These are the major goals:
- Get rid of the caching system - it's not needed if we run everything
from inside volshell, since command instances can just do their own
caching if they need it.
- Separate the concept of a profile from the address space - currently
the address space carries the profile (as.profile) but really has
nothing to do with it. In the new implementation the config object
carries both address spaces and profiles. The refactor removes the
global obj.Object() factory and instead moves this factory to the
profile - this strengthens the relation between the profile and the
objects it creates (i.e. it is the profile which actually creates the
object instances on a given address space).
- Remove all the magic around the place - currently when I want to do
a pslist, it automatically does a scan behind the scenes, which may or
may not work. The new architecture removes automatic behind-the-scenes
actions and requires the user to set them explicitly. If a command
requires a value for the KDBG offset, this must be provided (i.e. the
user will scan for it first, then provide the correct value). Since all
this happens in the shell we don't really have a high startup time each
time - so the user can just try the first result from the scan
themselves (this is what happens now anyway, but if the first hit is
incorrect things break badly without users knowing what to do next).
- The current VOLATILITY_MAGIC member of the profile will go into a
constants attribute of the profile. This is essentially the same as
the linux profile's idea of a smap (the system symbol map). It also
makes it much more intuitive for the user that they can just get
profile specific constants by getting profile.constants["KDBGHeader"].
Automatic code execution by the volatility magic will disappear, as I
mentioned.
- The new system automatically exports all command modules as volshell
commands (which return the instance). This makes it easy to call
different command plugins from inside the shell - with no additional
startup time (profile parsing etc). Also since we return a handle to
the command plugin, it can export all its useful methods for people to
use. For example:
class PSList(....):

    def list_tasks(self):
        """A generator over all the tasks."""

    def render_text(self):
        for task in self.list_tasks():
            ... pretty print in a table ...
There is no longer confusion over exactly what kind of data
calculate() returns since it does not really matter (The calculate
method for all plugins will be renamed something more suggestive of
its function). In the shell one simply does:
    for task in pslist().list_tasks():
        print task.comm

or:

    pslist().render()

or:

    pslist().render_text()

to see the old output.
Also in the new code the name of the command plugin is unrelated to
the class name, so you can have multiple implementations of pslist
which are selected based on the profile. (e.g. a windows pslist and a
linux pslist). In the volshell, launching pslist, will pick the right
one.
In addition to this we will no longer have a complex and brittle
inheritance structure for commands. Currently we inherit from a
command if we need to use one of its methods (e.g. dlllist, pslist,
moddump etc all share the same base class because they all need to
list tasks!). This will simply not be necessary, since any module can
easily call any other module through a global command factory - so to
get a process list from moddump, a plugin can just do:
    for task in self.RunModule("pslist"):
        ....
without needing to be inherited from pslist directly.
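A minimal sketch of that factory idea (RunModule is taken from the example above; the registry and the results() method name are invented for illustration):

```python
# moddump reuses pslist through a command registry instead of
# inheriting from it. All names here are illustrative.

REGISTRY = {}

class Command(object):
    def RunModule(self, name):
        # Look up and run another command plugin by name.
        return REGISTRY[name]().results()

class PSList(Command):
    def results(self):
        # Stand-in for a real task walk.
        return ["init", "sshd", "bash"]

class ModDump(Command):
    def results(self):
        # Get a process list without being a PSList subclass.
        return ["dumping modules of %s" % t for t in self.RunModule("pslist")]

REGISTRY["pslist"] = PSList
REGISTRY["moddump"] = ModDump

print(ModDump().results()[0])  # -> dumping modules of init
```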
Also inside volshell (if you use ipython) we can easily run scripts.
For example this is one of my scripts to find the current process
(i.e. python) task struct in memory (This is intended to run on
/dev/pmem):
    import os

    for task in pslist().list_tasks():
        if os.getpid() == task.pid:
            print "My task is at 0x%X (pid %s)" % (task.obj_offset, task.pid)
            break
Then from ipython I can run this script in the context of the shell:
In [2]: run -i /home/mic/test.py
My task is at 0xFFFF880013C1AE40 (pid 1315)
and in the end the variable "task" will contain my own task. If I
still want a pretty printed table:

    pslist().render_text()
An added bonus is that I can use ipython's awesome double tab
expansion to see all the members of task:
In [2]: task. (tab-tab)
Display all 200 possibilities? (y or n)
task._CType__initialized  task.bio_list    task.io_context
task.__class__            task.blocked     task.ioac
task.__delattr__          task.btrace_seq  task.irqaction
task.__dict__             task.cast        task.is_valid
....
This is an awesome interactive experience for volatility.
Thanks,
Michael.
Hi Folks,
I was recently trying to dig up a quick, easy method for
parsing Windows commandline history records out of memory dumps, and
came across a reference to Eoghan Casey's 2010 article, "Extracting
Windows command line details from physical memory". When I pinged him
about the cmd_history.py Volatility plugin he wrote along with that
paper, he said he'd sent it in to the Volatility development group, and
had presumed it would be included at some point. I've been digging
around, but I can't find it. Any idea what happened to it?
Thanks
John
----------------------------------------------------------
Quis custodiet ipsos custodes?... I do!
It seems that Volatility uses an I/O packet size that's too large for
my system.
Thanks to Freddie Witherden for supporting me.
Using a small dumping application (see below) provided by Freddie, I
was able to successfully dump the 2 GiB of RAM.
So I transferred this thread to vol-dev.
CU
Michael
Hello all,
To test whether the I/O packet size is the only problem, I adapted
ieee1394.py as follows:
def read(self, addr, length):
    """Reads bytes from the specified address."""
    # Lower the packet size to avoid oversized I/O requests.
    self._device.request_size = 1024
    return self._device.read(addr, length)
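An alternative to lowering request_size would be to split each large read into bus-sized chunks at the address-space layer. A standalone sketch (FakeDevice stands in for the forensic1394 device; the 1024-byte chunk size mirrors the value above):

```python
# Split every read into chunks the bus can handle, instead of changing
# the device's request size. FakeDevice is a stand-in for the real
# forensic1394 device object.

CHUNK = 1024

class FakeDevice(object):
    def __init__(self, memory):
        self.memory = memory

    def read(self, addr, length):
        # Simulate a bus that rejects oversized requests.
        assert length <= CHUNK, "request too large for this bus"
        return self.memory[addr:addr + length]

def chunked_read(device, addr, length):
    """Read `length` bytes starting at `addr`, CHUNK bytes at a time."""
    parts = []
    while length > 0:
        step = min(length, CHUNK)
        parts.append(device.read(addr, step))
        addr += step
        length -= step
    return b"".join(parts)

dev = FakeDevice(bytes(range(256)) * 32)  # 8 KiB of fake memory
data = chunked_read(dev, 100, 3000)
print(len(data))  # -> 3000
```

Whether this belongs in the address space or in forensic1394 itself is a design question; the sketch only shows that oversized requests can be avoided without touching the device.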
With this change forensic1394 no longer reports an I/O error, but the
data does not seem to be mappable:
# python vol.py -l firewire://forensic1394/0 pslist
Volatile Systems Volatility Framework 2.1_alpha
No suitable address space mapping found
Tried to open image as:
...
FirewireAddressSpace: Must be first Address Space
FileAddressSpace: Must be first Address Space
I have no clue what went wrong. Any help appreciated...
CU
Michael
# Simple dumping application
from time import sleep

from forensic1394 import Bus

b = Bus()

# Enable SBP-2 support for access to Linux/Windows targets
b.enable_sbp2()

# Sleep to give the bus time to reinitialise
sleep(2.0)

# Attempt to open the first attached device
d = b.devices()[0]
d.open()

# If this fails, try reducing the request size
d._request_size = 512

# Dump memory 1 MiB at a time
f = open('memorydump', 'wb')
for addr in range(1*1024*1024, 2*1024*1024*1024, 1*1024*1024):
    data = d.read(addr, 1*1024*1024)
    f.write(data)
f.close()