Searching in RAM with C++ is too slow?

Post Reply
Johannes
Posts: 6
Joined: 15 Apr 2011 15:35

Searching in RAM with C++ is too slow?

Post by Johannes » 15 Apr 2011 15:49

Hello,
HxD-Editor is really great! :) I am trying to make a little program in C++, where i am searching for some strings/numbers in the RAM of a running program.

I googled around a lot and tried many different things, but it is still very slow compared to HxD. I guess it's because I copy the RAM instead of searching directly in it.

Can I make more significant changes, or do I need other functions? Which language is HxD written in?

Here is my code:

Code: Select all

#include <windows.h>
#include <iostream>
#include <limits>
using namespace std;

int Memory(HWND MyWindow, long Address, int Value, bool _w)
{
	DWORD PROC_ID;
	HANDLE PROC_HANDLE;

	GetWindowThreadProcessId(MyWindow, &PROC_ID);
	PROC_HANDLE = OpenProcess(PROCESS_ALL_ACCESS, false, PROC_ID);
	if (PROC_HANDLE == NULL)
		return Value;
	if (_w)
		WriteProcessMemory(PROC_HANDLE, (LPVOID)Address, &Value, sizeof(Value), NULL);
	else
		ReadProcessMemory(PROC_HANDLE, (LPVOID)Address, &Value, sizeof(Value), NULL);
	CloseHandle(PROC_HANDLE);
	return Value;
}

int main()
{
	SetConsoleTitle("Memory Search");
	unsigned long long x;
   
	char Caption[1000] = "n1.txt - Editor";
//	cout << "Enter the caption of the window: ";
//	cin.getline(Caption, 1000);
	HWND Test = FindWindow(NULL, Caption);
	if (!Test){
	   cout << "\nCannot find window \"" << Caption << "\"!"<<endl;
	   system("pause");
	   return 1;
	}
	LPVOID Address = (void*)0x00010000;
//	cout << "\nEnter the address to start at: ";
//	cin >> Address;

	unsigned long long Limit = (numeric_limits<unsigned long long>::max() / 2) - ((long)Address + 1);
   
	int WhatToFind;
	cout << "\nWhat number are you trying to find: ";
	cin >> WhatToFind;
   
	for (x = 0; x < Limit; x++){
	   int AtAddress = Memory(Test, ((long)Address + x), 0, false);   // one ReadProcessMemory call per byte -- this is slow
	   if (AtAddress == WhatToFind){
		   cout << "\n\nFound it at address: " << (LPVOID)((long int)Address + x);
		   break;  // when found !
	   }
	}
	cout << endl;
	system("pause");
	return 0;
}
Greetings
Johannes

Maël
Site Admin
Posts: 936
Joined: 12 Mar 2005 14:15

Re: Searching in RAM with C++ is too slow?

Post by Maël » 16 Apr 2011 18:08

The reason most likely is that you call ReadProcessMemory for every byte. In general you should always read chunks of data in one go into a buffer (= an array of bytes), and then search in that buffer. The same applies when you access files, the network, or any other data source. Reading in chunks reduces the number of function calls, which is especially noticeable if they are slow, as with a function reading from a network such as the Internet, for example.

So, read in chunks of maybe 1024 bytes or larger (you have to experiment; 64 KB or 128 KB etc. might also be good sizes) in one go, and then search in this buffer. Reading byte by byte introduces a lot of overhead.

The programming language is unlikely to affect it much. I am using Delphi, which is Object Pascal, a compiled language like C++.

If you want to optimize more, there are special search algorithms like Boyer-Moore-Horspool. But first try to get the buffer approach right; all the other algorithms expect you to have the data ready in a buffer.

Johannes
Posts: 6
Joined: 15 Apr 2011 15:35

Re: Searching in RAM with C++ is too slow?

Post by Johannes » 16 Apr 2011 19:09

Thank you.
After I posted this thread I realized/expected that I need to use buffers, but I wasn't sure if I had overlooked something else and wanted to make sure before I put more work into it.
Now I know it :)

Is there an easy solution if the searched word is split across 2 buffers?

Eg:
searchedWord = "4567"
buffer1 = "12345"
buffer2 = "67890"

Should I copy the first (sizeof(searchedWord) - 1) bytes of buffer2 to the end of buffer1?
=>
buffer3 = "12345678"

Or am I missing the obvious point again? :P

Which algorithm do you use in HxD?

[Edit:]
Since I want all matches, it is probably best to copy all buffers into one big buffer and then use the algorithm on that big buffer. (If it is possible to use such a big buffer...)

Maël
Site Admin
Posts: 936
Joined: 12 Mar 2005 14:15

Re: Searching in RAM with C++ is too slow?

Post by Maël » 16 Apr 2011 20:29

Here is a slightly modified version of the comment I have in my source code about that issue:

Code: Select all

If we exceeded/reached the buffer end while comparing with the search pattern, we might have found a byte sequence that matches the search pattern but isn't fully "visible" because it overlaps the buffer end. So make sure we get that sequence as a whole when we read in the next chunk of the stream, and rewind accordingly.
stream = the memory you read with ReadProcessMemory. In HxD everything is modeled as a stream (disks, files, RAM). But in the case of RAM it isn't that easy, because there are inaccessible regions too (non-allocated memory or non-readable memory), so you have to skip those as well when you reach them. I won't go into detail about all those things though; it would take quite some time.

@big buffer: You could try that too, though depending on the amount of RAM your app uses, the buffer could get rather large.

To solve the problem of the non-contiguous "RAM stream", you could write a wrapper function for ReadProcessMemory that returns a buffer filled with zeros in case a certain portion isn't readable. That's not exactly perfect, because it will find zero sequences where in fact there is no data at all, but it should work for the cases you are interested in, I guess, and it would be much simpler.

Another tip: allocate the buffer once before searching, and reuse it for every ReadProcessMemory call. That will be faster than reallocating it every time.

@algorithm: I am using Boyer-Moore-Horspool with a lot of adaptations so it works well with different kinds of streams. There are even better search algorithms, but since the data-read functions (ReadProcessMemory, read from disk/file, etc.) are comparatively slow, it wouldn't make much of a difference.

I hope this helps.

Johannes
Posts: 6
Joined: 15 Apr 2011 15:35

Re: Searching in RAM with C++ is too slow?

Post by Johannes » 16 Apr 2011 22:19

@big buffer: You could try that too, though depending on the amount of RAM your app uses, the buffer could get rather large.
That's true. Again, I got this point soon after I posted it :D The program uses 1-2 GB, so it's really a bad idea...

About that inaccessible memory: for now I just hope I don't encounter such regions, but I will see. :mrgreen:

I hope this helps.
You do indeed help me very much!
Thanks again :)

Johannes
Posts: 6
Joined: 15 Apr 2011 15:35

Re: Searching in RAM with C++ is too slow?

Post by Johannes » 17 Apr 2011 12:54

About that inaccessible memory: for now I just hope I don't encounter such regions, but I will see. :mrgreen:
I guess I was a little too optimistic/naive... :oops:

So I have to work around those inaccessible regions. Do they follow patterns, like where they usually start and end? (Or is there any further literature on this?)

Am I right that HxD reads in 0x10000 blocks and switches to 0x1000 blocks for the parts with bad regions?

I found a maphack tutorial for Warcraft III at http://www.skillhackers.info/mhtut

Code: Select all

int main()
{
    // Find the wc3 window
    HWND hwar3 = ::FindWindow(NULL, "Warcraft III");

    HANDLE hcurrent = GetCurrentProcess();
    HANDLE hToken;
    BOOL bret = OpenProcessToken(hcurrent, 40, &hToken);  // 40 = TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY
    LUID luid;
    bret = LookupPrivilegeValue(NULL, "SeDebugPrivilege", &luid);

    TOKEN_PRIVILEGES NewState, PreviousState;
    DWORD ReturnLength;
    NewState.PrivilegeCount = 1;
    NewState.Privileges[0].Luid = luid;
    NewState.Privileges[0].Attributes = 2;  // 2 = SE_PRIVILEGE_ENABLED
    bret = AdjustTokenPrivileges(hToken, FALSE, &NewState, 28, &PreviousState, &ReturnLength);

    DWORD PID, TID;
    TID = ::GetWindowThreadProcessId(hwar3, &PID);

    // Open the wc3 process
    HANDLE hopen = OpenProcess(PROCESS_ALL_ACCESS | PROCESS_TERMINATE | PROCESS_VM_OPERATION |
                               PROCESS_VM_READ | PROCESS_VM_WRITE, FALSE, PID);

    // Write memory
    // 6F2A08B1     66:BF 0100     MOV DI,0FF
    DWORD data = 0xBF;
    bret = WriteProcessMemory(hopen, (LPVOID)0x6F2A08B2, &data, 1, 0);
    data = 0x0F;
    bret = WriteProcessMemory(hopen, (LPVOID)0x6F2A08B3, &data, 1, 0);
    data = 0x00;
    bret = WriteProcessMemory(hopen, (LPVOID)0x6F2A08B4, &data, 1, 0);

    // Close handle
    bret = CloseHandle(hopen);
    return 0;
}
And in this part he activates "SeDebugPrivilege" for the running process.

Code: Select all

    bret = LookupPrivilegeValue(NULL, "SeDebugPrivilege", &luid);

    TOKEN_PRIVILEGES NewState, PreviousState;
    DWORD ReturnLength;
    NewState.PrivilegeCount = 1;
    NewState.Privileges[0].Luid = luid;
    NewState.Privileges[0].Attributes = 2;  // 2 = SE_PRIVILEGE_ENABLED
    bret = AdjustTokenPrivileges(hToken, FALSE, &NewState, 28, &PreviousState, &ReturnLength);
Is SeDebugPrivilege only needed to be able to write to the other process, or does it also let you access (more) inaccessible regions?
(I don't have WC3 to test it.) :(
(Maybe you don't have it either, but perhaps you don't need it because you already know.) ;)

Maël
Site Admin
Posts: 936
Joined: 12 Mar 2005 14:15

Re: Searching in RAM with C++ is too slow?

Post by Maël » 17 Apr 2011 16:16

The simplest way is probably reading page by page. You can get the page size using GetSystemInfo. The documentation says "The page size and the granularity of page protection and commitment. This is the page size used by the VirtualAlloc function." So that's exactly what you need, because you want to be able to skip inaccessible sections, which depends on the protection, and the smallest unit of an inaccessible section is a page.

Then you can do something like this (pseudocode):

Code: Select all

for (Address = 0; Address <= MaxAddress - PageSize; Address += PageSize)
{
    if (!ReadProcessMemory(Process, Address, Buffer))
        FillWithZeros(Buffer);
    Search(Buffer); // and handle the case where the search pattern overlaps the buffer end, as mentioned in the earlier post
}
It might be a bit slow, I haven't tested it. Alternatively, you can use VirtualQueryEx to find consecutive pages that are inaccessible, i.e. larger chunks you can skip in one go.

It's correct that you need the debug privilege; most programs have it set by default, especially when you are running as administrator. HxD itself doesn't try to acquire that privilege if it isn't there already.

I also suggest you read a bit of the Windows API documentation. I gave you some pointers; it will give you a much better understanding, and then you can experiment further.

Johannes
Posts: 6
Joined: 15 Apr 2011 15:35

Re: Searching in RAM with C++ is too slow?

Post by Johannes » 17 Apr 2011 16:45

I also suggest you read a bit of the Windows API documentation
Do you mean pages like this or something different?
http://msdn.microsoft.com/en-us/library ... 85%29.aspx

I read it when I have a specific function, but just reading it in general I feel kind of lost :/

I will try to do it like this:

Image

Something different:
http://en.wikipedia.org/wiki/Boyer%E2%8 ... _algorithm
I tried that example code and I think there was a mistake; I corrected it on the wiki. I hope I didn't fail!?
Mine was this one: 14:20, 17 April 2011
After me, another guy undid my change and made a second change...

Maël
Site Admin
Posts: 936
Joined: 12 Mar 2005 14:15

Re: Searching in RAM with C++ is too slow?

Post by Maël » 17 Apr 2011 17:30

I also suggest you read a bit of the Windows API documentation
Do you mean pages like this or something different?
http://msdn.microsoft.com/en-us/library ... 85%29.aspx
Yes, pages like these. Just try to translate the pseudocode I gave you into a real C++ program.
I read it when I have a specific function, but just reading it in general I feel kind of lost :/
Don't think too much; just experiment and run tests to see if your understanding matches the explanation. It's normal that this takes time to work out well. I needed a long time to make it work and to understand the details, too. Think of specific examples, including corner cases (like the one where the search pattern potentially overlaps the buffer end), and think about an algorithm to solve each example. Then unify them into a general algorithm. Start with the simple cases first. Trying everything at once is just going to be frustrating, because you won't know where an error comes from if there is one (which is very likely to happen).
Something different:
http://en.wikipedia.org/wiki/Boyer%E2%8 ... _algorithm
I tried that example code and i think there was a mistake, i corrected it on wiki. I hope i didn't fail!?
"last" wasn't assigned properly before as you have noticed but the algorithm was expecting it to be last +1, so you would have to adjust for that. (but I haven't time to think it totally through, my algorithm is written differently).

I suggest you first try to get a simple search working; the performance really doesn't matter. Only optimize later, if you really need it. First get it working in a simple way (as mentioned in the pseudocode); using BMH just makes it more complex and won't make such a big difference anyway (as mentioned above).

Instead of BMH you can use a simple search. See these links, for example:
http://answers.yahoo.com/question/index ... 701AAJi7jq
or this one (it is hard-coded for 4 bytes, but it should be easy to adapt to more bytes; instead of using fread you would use your read-from-memory wrapper that returns null when the memory region is inaccessible):
http://stackoverflow.com/questions/1541 ... 46#1541846

I think with that information, searching around a bit, and asking in a C++ forum, you should be able to get what you want.

Johannes
Posts: 6
Joined: 15 Apr 2011 15:35

Re: Searching in RAM with C++ is too slow?

Post by Johannes » 19 Apr 2011 14:58

I finished my little program. It runs in about the same time as HxD when it has focus (2-3 secs) :)
When the other program (whose RAM is read) has focus, it needs about 6-7 seconds. I can't test that with HxD, but I would expect it to be about the same speed. Although I only read up to 0x6FFFFFFF (which is about 1.8 GB).
Is there an easy way to determine where to stop?

When I have some spare time I will rewrite my code a bit (so it's not as ugly as it is now) and post it here if you want me to. :)

I now read in chunks of 0x50000, and only 0x10000 when I am in "bad regions".

Thanks again for your great help,
greetings
Johannes

Maël
Site Admin
Posts: 936
Joined: 12 Mar 2005 14:15

Re: Searching in RAM with C++ is too slow?

Post by Maël » 20 Apr 2011 21:46

Johannes wrote: I finished my little program. It runs in about the same time as HxD when it has focus (2-3 secs) :)
Congrats :)
Johannes wrote: Is there an easy way to determine where to stop?
See GetSystemInfo. [edit:] Here is a good example: http://www.codeguru.com/forum/showthread.php?p=1806848
[Edit2:] Here is another link, which uses VirtualQueryEx to iterate over all possible memory regions (as HxD does, too): http://www.codeproject.com/KB/threads/MDumpAll.aspx

And yes please, go ahead and post your source code, it could be useful for other people.

Post Reply