Memory allocation failure
-
I'm facing a situation where tracing in my application shows that an attempt to allocate memory via C++ new is failing, naturally only at the customer, and there only when the allocation is large. Every failure instance I've seen involves sizes just beyond 4K (say 4100 - 4400 bytes); smaller allocations work fine. I'm really scratching my head looking for a good next step in either diagnosis or adjustment/repair. AJ
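As a first diagnostic step, it may help to log exactly what the failing allocation reports. A minimal sketch (the helper name is mine, not from the post) that checks both failure modes, since old MFC builds returned NULL from operator new instead of throwing std::bad_alloc:

```cpp
#include <cstddef>
#include <cstdio>
#include <new>

// Hypothetical wrapper around the failing allocation: log the exact
// requested size on failure instead of letting it pass unobserved.
unsigned char* alloc_traced(std::size_t bytes)
{
    try {
        unsigned char* p = new unsigned char[bytes];
        if (p == nullptr)  // possible with pre-standard / MFC operator new
            std::fprintf(stderr, "new returned NULL for %zu bytes\n", bytes);
        return p;
    } catch (const std::bad_alloc&) {
        // Standard-conforming operator new reports failure this way.
        std::fprintf(stderr, "new threw bad_alloc for %zu bytes\n", bytes);
        return nullptr;
    }
}
```

Knowing whether the failure is a NULL return or an exception narrows down which allocator path (CRT, MFC, or a custom handler) is actually rejecting the request.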
-
4K is not what I would call large by any means. Are you new'ing a large number of objects? I can't recall ever seeing new fail. Maybe you've screwed the heap. A Debug Build with CRT Heap checking enabled may help. Neville Franks, Author of ED for Windows www.getsoft.com and coming soon: Surfulater www.surfulater.com
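To enable the CRT heap checking mentioned above, the flags can be set programmatically early in startup. A minimal sketch (the helper name is mine; the calls only take effect in an MSVC Debug build, and the guard keeps it portable):

```cpp
// Turn on CRT debug-heap validation. Under a non-MSVC compiler this
// compiles to a no-op, so the sketch builds anywhere.
#ifdef _MSC_VER
#include <crtdbg.h>
#endif

bool enable_crt_heap_checks()
{
#ifdef _MSC_VER
    // Read the current debug-heap flags, then add:
    //  - _CRTDBG_CHECK_ALWAYS_DF: validate the whole heap on every
    //    allocation and free (slow, but catches an overwrite close
    //    to where it actually happens)
    //  - _CRTDBG_LEAK_CHECK_DF: dump leaks at program exit
    int flags = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG);
    flags |= _CRTDBG_CHECK_ALWAYS_DF | _CRTDBG_LEAK_CHECK_DF;
    _CrtSetDbgFlag(flags);
#endif
    return true;
}
```

With _CRTDBG_CHECK_ALWAYS_DF set, a corrupted heap triggers an assertion at the first alloc/free after the overwrite rather than much later at the failing new.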
-
I wouldn't think 4K would be bad either. It's a single new, which allocates a varying amount depending on the amount of data attached to a print request. I can't say I've got enough instances yet to prove 4K is a hard limit, and I've never seen a new fail this way either. Is the heap screwed? May very well be, but it isn't on our systems running the same program levels and even the same print data. Help figuring out why it's screwed is part of why I turned here. I'll see what I can do with your suggestion. Thanks AJ
-
The problem probably isn't where you think it is. Normally when this type of error occurs, it is because of a miscalculation that caused a memory overwrite some time earlier in the program. Therefore, you need to look backwards at what happened before you made the allocation; the overwrite most likely occurred in some other function, usually one called from the current function, or in one of the functions up the call chain that eventually reached the function where the failure shows up. One approach that has helped me track down this kind of problem is to use TRACE in all related functions (normally the same class) to show every allocation/deletion and read/write involved. For an example of what I am talking about, download CDibData (@codeproject); it contains code for doing this type of tracking. I had the same problem when I wrote that code (and others) and have not bothered to remove the tracing (yet!). Well, I hope this helps! Good luck! INTP
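The TRACE-style tracking described above can also be done without MFC by giving the suspect class its own logging operator new/delete. A minimal sketch (class name and field are made up for illustration):

```cpp
#include <cstdio>
#include <cstdlib>
#include <new>

// Sketch: per-class operator new/delete that log every allocation and
// deletion, similar in spirit to the TRACE-based tracking described above.
// Pairing the logged pointers reveals double-deletes and leaks; a delete
// with no matching new hints at a mismatched allocator.
struct PrintJob {
    static void* operator new(std::size_t n) {
        void* p = std::malloc(n);
        std::fprintf(stderr, "PrintJob new    %zu bytes -> %p\n", n, p);
        if (p == nullptr)
            throw std::bad_alloc();
        return p;
    }
    static void operator delete(void* p) noexcept {
        std::fprintf(stderr, "PrintJob delete %p\n", p);
        std::free(p);
    }
    char data[64];  // placeholder payload
};
```

Diffing the log from a failing customer run against a healthy in-house run should show where the allocation patterns diverge.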
-
That's right that the problem is almost definitely not where you think it is. I mean, look at the definition of MFC's operator new:

    void* __cdecl operator new(size_t nSize)
    {
        void* pResult;
    #ifdef _AFXDLL
        _PNH pfnNewHandler = _pfnUninitialized;
    #endif
        for (;;)
        {
    #if !defined(_AFX_NO_DEBUG_CRT) && defined(_DEBUG)
            pResult = _malloc_dbg(nSize, _NORMAL_BLOCK, NULL, 0);
    #else
            pResult = malloc(nSize);
    #endif
            if (pResult != NULL)
                return pResult;
    #ifdef _AFXDLL
            if (pfnNewHandler == _pfnUninitialized)
            {
                AFX_MODULE_THREAD_STATE* pState = AfxGetModuleThreadState();
                pfnNewHandler = pState->m_pfnNewHandler;
            }
            if (pfnNewHandler == NULL || (*pfnNewHandler)(nSize) == 0)
                break;
    #else
            if (_afxNewHandler == NULL || (*_afxNewHandler)(nSize) == 0)
                break;
    #endif
        }
        return pResult;
    }

Look at the for (;;). It should not fail. You have corrupted the heap. Get MMGR from http://www.fluidstudios.com/publications.html . It is an excellent class. Ensure that you carefully read the instructions [especially the order of includes]. Use it on Debug and Release builds. Check that you have not mixed new with free or similar. Check for resource leaks. Regards, axe
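One way to narrow down where the corruption happens is to bisect with explicit heap checks. A minimal sketch (the helper name is mine; under MSVC in a Debug build _CrtCheckMemory validates every block, and elsewhere this stub simply reports the heap as healthy):

```cpp
// Sketch of bisecting heap corruption: sprinkle heap_ok() calls along the
// code path that leads up to the failing new. The first call that returns
// false brackets the overwrite between it and the previous passing check.
#ifdef _MSC_VER
#include <crtdbg.h>
#endif

bool heap_ok()
{
#ifdef _MSC_VER
    // In an MSVC Debug build this walks and validates the debug heap;
    // in a Release build the CRT defines it away to a constant success.
    return _CrtCheckMemory() != 0;
#else
    // No CRT debug heap available: assume healthy on other toolchains.
    return true;
#endif
}
```

Repeatedly halving the region between a passing and a failing check converges on the statement that does the overwrite, much faster than reading every allocation by eye.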