I built LiME from the tarball on the project site (not the latest svn) and was able to dump memory successfully (type=lime). After many trials and tribulations I got the Volatility profile built for CentOS 5.3 x64 (I had to remove pmem from the Makefile). I put the profile in the correct directory, and vol.py --info lists it as expected; however, when I try to use the profile with the memory image I get an error.
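For reference, here is roughly how I built the profile (the usual Volatility 2.x recipe, as I understand it: module.dwarf compiled against the kernel headers/source copied from the target, zipped together with the matching System.map; if I have the naming convention right, Volatility derives the LinuxCentOS_5_3x64 profile name from the zip name plus the image's bitness):

    cd volatility/tools/linux && make   # builds module.dwarf against the copied CentOS headers (pmem removed from the Makefile)
    zip CentOS_5_3.zip module.dwarf System.map-2.6.18-128.el5
    cp CentOS_5_3.zip volatility/plugins/overlays/linux/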
chort@hydra:~/code/profiles-volatility/CentOS_5.3_x64$ vol.py --profile=LinuxCentOS_5_3x64 -f /fun/ir/geriatrix.lime linux_lsmod
Volatile Systems Volatility Framework 2.3_alpha
WARNING : volatility.obj : Overlay structure cpuinfo_x86 not present in vtypes
No suitable address space mapping found
Tried to open image as:
MachOAddressSpace: mac: need base
LimeAddressSpace: lime: need base
WindowsHiberFileSpace32: No base Address Space
WindowsCrashDumpSpace64: No base Address Space
HPAKAddressSpace: No base Address Space
VirtualBoxCoreDumpElf64: No base Address Space
VMWareSnapshotFile: No base Address Space
WindowsCrashDumpSpace32: No base Address Space
JKIA32PagedMemoryPae: No base Address Space
AMD64PagedMemory: No base Address Space
JKIA32PagedMemory: No base Address Space
IA32PagedMemoryPae: Module disabled
IA32PagedMemory: Module disabled
MachOAddressSpace: MachO Header signature invalid
MachOAddressSpace: MachO Header signature invalid
LimeAddressSpace: Invalid Lime header signature
WindowsHiberFileSpace32: PO_MEMORY_IMAGE is not available in profile
WindowsCrashDumpSpace64: Header signature invalid
HPAKAddressSpace: Invalid magic found
VirtualBoxCoreDumpElf64: ELF64 Header signature invalid
VMWareSnapshotFile: Invalid VMware signature: 0xf000ff53
WindowsCrashDumpSpace32: Header signature invalid
JKIA32PagedMemoryPae: Incompatible profile LinuxCentOS_5_3x64 selected
AMD64PagedMemory: Failed valid Address Space check
JKIA32PagedMemory: Incompatible profile LinuxCentOS_5_3x64 selected
IA32PagedMemoryPae: Module disabled
IA32PagedMemory: Module disabled
FileAddressSpace: Must be first Address Space
ArmAddressSpace: Incompatible profile LinuxCentOS_5_3x64 selected
On a hunch (the warning above mentions cpuinfo_x86), I checked the directory in which I built the profile (headers & source copied from the target system):
chort@hydra:~/code/profiles-volatility/CentOS_5.3_x64$ grep cpuinfo *
System.map-2.6.18-128.el5:ffffffff8006f328 t show_cpuinfo
System.map-2.6.18-128.el5:ffffffff80103251 t cpuinfo_open
System.map-2.6.18-128.el5:ffffffff8020eadb t show_cpuinfo_max_freq
System.map-2.6.18-128.el5:ffffffff8020eafa t show_cpuinfo_min_freq
System.map-2.6.18-128.el5:ffffffff8020f759 t show_cpuinfo_cur_freq
System.map-2.6.18-128.el5:ffffffff802f0bc0 D cpuinfo_op
System.map-2.6.18-128.el5:ffffffff80308420 d proc_cpuinfo_operations
System.map-2.6.18-128.el5:ffffffff803319a0 d cpuinfo_cur_freq
System.map-2.6.18-128.el5:ffffffff80331b20 d cpuinfo_min_freq
System.map-2.6.18-128.el5:ffffffff80331b60 d cpuinfo_max_freq
Platform running Volatility (2.3_alpha, latest from svn):
Linux hydra 3.2.0-35-generic #55-Ubuntu SMP Wed Dec 5 17:42:16 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
Source of memory image:
Linux geriatrix.smtps.net 2.6.18-128.el5 #1 SMP Wed Jan 21 10:41:14 EST 2009 x86_64 x86_64 x86_64 GNU/Linux
What am I missing?
--
chort
Hi all,
I'm currently attempting to code up a bitmap (within an overlay) that consists of an array of 4 ulongs.
With (say) a single ulong, the following works great:
profile.merge_overlay({
    'XXX': [ None, ['Flags', {'target': 'unsigned long',
                              'bitmap': { 'A': 0, 'B': 1, 'C': 2 }}]]
})
However, the obvious generalisation to 4 ulongs:
profile.merge_overlay({
    'XXX': [ None, ['Flags', {'target': ['array', 4, ['unsigned long']],
                              'bitmap': { 'A': 0, 'B': 1, 'C': 2 }}]]
})
fails. Looking at the source, profile.merge_overlay calls:

    obj.Object(['array', 4, ['unsigned long']], offset=0, ..)

which in turn raises TypeError: unhashable type: 'list' when it calls:

    vm.profile.has_type(['array', 4, ['unsigned long']])

(presumably because has_type does a dictionary lookup keyed on the type name, and a list cannot be used as a dict key).
Attempts at using obj.Array instead also flounder.
Does anyone have any hints or tips as to how best to deal with bitmaps that are arrays of bytes, ulongs or similar? Is it a case of having to extend the obj.Flags class so that such things can be handled?
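For what it's worth, the workaround I'm currently leaning towards is to leave the member as a plain array and test the bits by hand, along these lines (an untested sketch: it assumes bit N of the bitmap lives in word N // 64 at bit position N % 64):

    BITMAP = {'A': 0, 'B': 1, 'C': 2}

    def bitmap_flag_set(words, name):
        # words is the Array of 4 unsigned longs; name is a flag label
        bit = BITMAP[name]
        return (int(words[bit // 64]) >> (bit % 64)) & 1

but it would obviously be nicer to have this wrapped up in a Flags-style object.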
Many thanks,
Carl.
We are happy to announce that our memory forensics training course
will be going to the Netherlands in September:
http://volatility-labs.blogspot.com/2013/04/memory-forensics-training-nethe…
This course is taught directly by Volatility developers, and will
provide intense training in memory forensics for incident response,
malware analysis, and digital forensic investigation.
This will be our only course outside of the USA in 2013, and we have
already had a number of people inquire about attending, so please
contact us ASAP if you are interested in taking it.
Thanks,
Andrew (@attrc)
Hi all,
I've just created a profile for my Ubuntu 12.04 (3.5.0-25) system and
dumped the memory using the VirtualBox guest core dump facility.
Using the linux_proc_maps plugin I get the following output:
http://paste.ubuntu.com/5576450/
I was expecting output similar to "cat /proc/<pid>/maps". As you can
see, these "-0x4...000" addresses are obviously wrong. Is this something
I am doing wrong myself, or is it a bug? It happens for other processes
as well.
If this is a bug, I'll open a new issue in the tracker with the steps
I followed to reproduce it.
Cheers,
Edwin
Hi,
There was some talk about this on the #volatility IRC channel; I won't go
into the details here. Basically, I'm wondering how one can use vtop from
volshell, given that it is not a plugin.
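For context, what I'd like to be able to do inside volshell is something along these lines (I'm guessing at the interface here, so the exact calls may well be wrong):

    >>> cc(pid=868)                           # switch into a process context
    >>> hex(self.addrspace.vtop(0x7ffd3000))  # translate virtual -> physical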
thanks
Dear all,
I was hoping that someone might be able to clear up a query I have w.r.t. Windows memory and how it handles memory pages (specifically, information regarding a page's executable permissions). I'm assuming that PAE is in use here.
The idea is that we have some page (holding virtual address addr) within a process's address space and wish to know whether that page is executable.
Within user space, we can use the VADs and obtain executable information via vad.u.VadFlags.Protection; vadinfo.PROTECT_FLAGS allows the returned value (an int) to be converted into a string representation.
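For example, something like this (sketch):

    from volatility.plugins.vadinfo import PROTECT_FLAGS

    protection = vad.u.VadFlags.Protection.v()
    # fall back to the raw value if the int has no named entry
    name = PROTECT_FLAGS.get(protection, hex(protection))  # e.g. 'PAGE_EXECUTE_READWRITE'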
Within kernel space, we can use the PTE to determine the page's exec status; for example:
def get_page_nx_bit(addrspace, addr):
    # walk the PAE paging hierarchy for the given virtual address
    pdpte = addrspace.get_pdpte(addr)
    pde = addrspace.get_pde(addr, pdpte)
    pte = addrspace.get_pte(addr, pde)
    # bit 63 of a PAE PTE is the NX (no-execute) bit
    return pte >> 63
gets the NX bit for the PTE associated with a given page address.
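(For instance, assuming proc_as came from task.get_process_address_space(), the call would be something like get_page_nx_bit(proc_as, vad.Start).)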
Now, in comparing bit 63 of the PTE entries against the VAD protection permissions in user space, I'm noticing the occasional difference. Naively, I'd have expected the PTEs to agree with the protection information on the VADs (within user space).
Any help or pointers to information resolving the above is very much appreciated.
Many thanks,
Carl.
Thanks Michael. My whole purpose in this instance was to test the nbdserver
project for my blog post. That's why I used this particular method. I'll
follow up with Jeff and let him know what you came up with.
Thanks!
Ken
On Mon, Apr 15, 2013 at 3:51 PM, Michael Cohen <scudette(a)gmail.com> wrote:
> Ah that explains the crash then....
>
> You cannot read the entire physical address space from start to finish,
> because you will hit DMA regions, and reading those will activate
> PCI devices and blue screen the box. You must skip the regions that are
> reserved for PCI devices.
>
> The winpmem device driver allows you to read wherever you want (i.e. it
> does not protect you from shooting yourself in the foot :-). The user-space
> application is supposed to figure out where it's safe to read. From quickly
> looking at the code for nbdserver
>
> https://github.com/jeffbryner/NBDServer/blob/master/main.cpp
>
> it does not seem to implement the correct algorithm for skipping the
> reserved memory regions (it should issue an IO control to the pmem driver
> and ask it for them). There is some code there that appears to attempt this,
> but it does not look complete to me. Maybe best to ask the author?
>
> Anyway, for the purpose of experimenting, you could just write a short
> script around dd, using the skip and seek parameters to jump over the
> reserved regions and image only the available ones. This should
> not crash.
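>
> For example, something along these lines (the run boundaries below are made
> up for illustration; take the real ones from the memory ranges the driver
> reports):
>
> dd if=/dev/nbd0 of=ram.dd bs=4096 skip=0 seek=0 count=159744
> dd if=/dev/nbd0 of=ram.dd bs=4096 skip=262144 seek=262144 count=2883584 conv=notrunc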
>
> If your goal is just to obtain the image over the network, why not use
> netcat, as the example demonstrates?
>
> Thanks,
> Michael.
>
>
>
>
>
> On 15 April 2013 22:34, Ken Pryor <kdpryor(a)gmail.com> wrote:
>
>> Hi Michael,
>>
>> Yes, I ran winpmem on the subject machine and allowed it to save an image
>> file to that machine successfully. In my nbdserver test, I ran winpmem -l
>> and verified the service was running. I went to the nbd Linux client and
>> began the process of imaging pmem from the subject computer via the
>> network. Running nbd-client on the Linux workstation, I assigned the pmem
>> output coming over the network to /dev/nbd0. I then used the following
>> command line to image:
>>
>> dd if=/dev/nbd0 of=./ramoutput.dd
>>
>> This has been successful on 32-bit XP machines, but it dies on the 64-bit
>> machine. If this description doesn't make sense, I'll try to write up a
>> better one later this evening.
>>
>> Ken
>>
>>
>>
>> On Mon, Apr 15, 2013 at 12:16 PM, Michael Cohen <scudette(a)gmail.com>wrote:
>>
>>> Hi Ken,
>>> I have not had a chance to play with nbdserver. Are you saying that
>>> winpmem acquisition to the local disk completed ok?
>>>
>>> Did you manage to image with winpmem over a socket and netcat? Or are
>>> you trying to image to a network share?
>>>
>>> Thanks,
>>> Michael.
>>>
>>>
>>> On 15 April 2013 18:25, Ken Pryor <kdpryor(a)gmail.com> wrote:
>>>
>>>> I recently used the latest version of winpmem in conjunction with Jeff
>>>> Bryner's nbdserver to acquire RAM from a couple of different systems in
>>>> support of a blog post I was writing. Acquisition of 1 GB of memory from
>>>> a 32-bit XP VM via the network worked perfectly.
>>>>
>>>> However, acquiring memory from a 64-bit Win 7 physical system with 12
>>>> GB of RAM failed. It would start okay, but the Win 7 system would freeze
>>>> and reboot at the 3.5 GB mark every time it was acquired using nbdserver
>>>> over the network. Using winpmem directly on the machine works
>>>> successfully, but it fails over the network.
>>>>
>>>> Any suggestions as to the problem? I can provide any data or follow up
>>>> testing if needed.
>>>>
>>>> Ken
>>>>
Hi Michael,
Yes, I ran winpmem on the subject machine and allowed it to save an image
file to that machine successfully. In my nbdserver test, I ran winpmem -l
and verified the service was running. I went to the nbd Linux client and
began the process of imaging pmem from the subject computer via the
network. Running nbd-client on the Linux workstation, I assigned the pmem
output coming over the network to /dev/nbd0. I then used the following
command line to image:
dd if=/dev/nbd0 of=./ramoutput.dd
This has been successful on 32-bit XP machines, but it dies on the 64-bit
machine. If this description doesn't make sense, I'll try to write up a
better one later this evening.
Ken
On Mon, Apr 15, 2013 at 12:16 PM, Michael Cohen <scudette(a)gmail.com> wrote:
> Hi Ken,
> I have not had a chance to play with nbdserver. Are you saying that
> winpmem acquisition to the local disk completed ok?
>
> Did you manage to image with winpmem over a socket and netcat? Or are you
> trying to image to a network share?
>
> Thanks,
> Michael.
>
>
> On 15 April 2013 18:25, Ken Pryor <kdpryor(a)gmail.com> wrote:
>
>> I recently used the latest version of winpmem in conjunction with Jeff
>> Bryner's nbdserver to acquire RAM from a couple of different systems in
>> support of a blog post I was writing. Acquisition of 1 GB of memory from a
>> 32-bit XP VM via the network worked perfectly.
>>
>> However, acquiring memory from a 64-bit Win 7 physical system with 12 GB
>> of RAM failed. It would start okay, but the Win 7 system would freeze and
>> reboot at the 3.5 GB mark every time it was acquired using nbdserver over
>> the network. Using winpmem directly on the machine works successfully, but
>> it fails over the network.
>>
>> Any suggestions as to the problem? I can provide any data or follow up
>> testing if needed.
>>
>> Ken
>>
Hey all,
So far I am unable to get hivedump to work when specifying an offset, in
either interactive or non-interactive mode:
Offset(V)  Offset(P)   Name
0xe1036b60 0x4948ab60  \Device\HarddiskVolume1\Windows\system32\config\SYSTEM @ 0xe1036b60
./vol.py --profile=WinXPSP3x86 -f ../20130412-194645.raw hivedump -o 0xe1036b60
**************************************************
Hive -
Last Written Key
------------------------ ---
ERROR:root:Error: invalid literal for int() with base 10: 'x'
ERROR:root:invalid literal for int() with base 10: 'x'. Try --debug for more information.
Not specifying an offset works, however:
./vol.py --profile=WinXPSP3x86 -f ../20130412-194645.raw hivedump
**************************************************
Hive \Device\HarddiskVolume1\Documents and Settings\ACS\NTUSER.DAT @ 0xe1452b60
Last Written Key
------------------------ ---
2013-04-10 18:32:35+0000 CMILoadedHive-{D6ED518C-64C9-49F9-A016-8B8A7E2C051E}/AppEvents
...
Any hints on what I could be doing wrong? Thank you.
James
I recently used the latest version of winpmem in conjunction with Jeff
Bryner's nbdserver to acquire RAM from a couple of different systems in
support of a blog post I was writing. Acquisition of 1 GB of memory from a
32-bit XP VM via the network worked perfectly.
However, acquiring memory from a 64-bit Win 7 physical system with 12 GB of
RAM failed. It would start okay, but the Win 7 system would freeze and reboot
at the 3.5 GB mark every time it was acquired using nbdserver over the
network. Using winpmem directly on the machine works successfully, but it
fails over the network.
Any suggestions as to the problem? I can provide any data or follow up
testing if needed.
Ken