Restricting rights to delete a dynamic array? [modified]
-
Hi, thanks for the suggestions. I've actually decided not to change what I've done, so I'm not seeking further help, but I'll try to explain the situation better so you understand the conclusion I've come to, since it's a bit more complicated than it may seem.

I'm sure the implementation of the container is OK and user friendly. The only issue is that memory for a member array shouldn't be deallocated by a user, even though they have permission to do so. This shouldn't really be a problem, because someone probably wouldn't assume they should deallocate memory through a const pointer that's a member of a container class (correct me if I'm wrong), and anyone who's used a map or vector container should know it's not up to them to manage the memory for a map or vector's member variables. The container I've implemented is a specialised map, and like a map it manages its own memory. Since the array is declared protected, I'd expect users to have the sense to check whether it's safe to manually deallocate the memory it uses, and the function that retrieves the array has a const specifier in its declaration, which means new memory can't be assigned to it; that should indicate to users that they shouldn't delete the array themselves.

What would totally solve the issue is if the C++ language (or compilers) made it so that a dynamic array declared const couldn't be deleted. I didn't see why it's possible to deallocate the memory pointed to by a const pointer using delete while it's not possible to allocate new memory for it using new; of course, it's because the address held in a const pointer can't change, but the memory at that address can still be deallocated.

Basically, the container I've made is a map that stores elements in a contiguous array where it's possible to specify the order they are stored in. I say array, but it's actually a block of memory allocated with operator new and never with new or new[], and the memory is always freed with operator delete. The reason for managing the "array" this way is that the calling of element constructors/destructors is more controlled: a constructor is only called when a new element is actually inserted, and destructors are not called when the array is resized, which makes it a very useful kind of container. A vector wouldn't work, because being able to retrieve the dynamic array manipulated by the container is a key reason why this container would be useful compared to the standard ones. When I want to store elements mapped
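For illustration only, here is a minimal sketch of the kind of raw-storage scheme described above (uninitialized memory from operator new, elements constructed with placement new and destroyed explicitly). The names are made up for the example and are not the poster's actual code:

    #include <new>      // operator new, placement new
    #include <cstddef>  // std::size_t

    // Hypothetical helper: memory is allocated uninitialized, and element
    // constructors/destructors run only when the container decides to.
    template <class TValue>
    struct RawBlock
    {
        TValue*     pElements;
        std::size_t nCapacity;

        explicit RawBlock(std::size_t n)
            : pElements(static_cast<TValue*>(::operator new(n * sizeof(TValue)))),
              nCapacity(n)
        {
        }

        // Construct an element in place; no other element is touched.
        void ConstructAt(std::size_t i, const TValue& value)
        {
            new (pElements + i) TValue(value);   // placement new
        }

        // Destroy a single element explicitly.
        void DestroyAt(std::size_t i)
        {
            pElements[i].~TValue();
        }

        // Frees the raw memory only; constructed elements must have been
        // destroyed with DestroyAt() first.
        ~RawBlock()
        {
            ::operator delete(pElements);
        }
    };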
Hi! I tried to understand the problem you describe, and this came to mind... it's probably the same as my first post, but a little more detailed.

1) Use data containers (STL or your own implementation) whenever possible and avoid manual memory allocations with new and delete.

2) Alternatively, rely on RAII (Resource Acquisition Is Initialization), so that memory is automatically deallocated when an object goes out of scope (smart pointers from Boost or your own implementation). Such a smart pointer is responsible for encapsulating dynamically allocated memory and knows itself when to deallocate the memory again. You can even prevent application code from deleting a pointer that is being managed with a shared pointer, super fancy. :) This is how I avoid situations where a developer wonders "do I own that pointer? do I need to delete it?".

With the options shown above it is easier to keep control of ownership and lifetime, in my opinion. I rarely call new and delete in application code. When designing APIs I often use a buffer/container object that a method can fill up, or the class has methods/accessors to retrieve individual pieces of information. All of the above can be enforced with a C++ coding standard. I suggest having a look at Boost's shared pointer techniques and maybe generally thinking about memory handling strategies from an overall design point of view. Hope it helps.

/M

Webchat in Europe :java: (only 4K)
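As a small illustration of the RAII idea above, assuming Boost is available, a boost::shared_array owns a dynamically allocated array and releases it automatically, so application code never calls delete:

    #include <boost/shared_array.hpp>   // assumes Boost is installed

    void Example()
    {
        // The array is deleted (with delete[]) when the last shared_array
        // referring to it goes out of scope.
        boost::shared_array<int> values(new int[100]);

        values[0] = 42;             // normal element access
        int* raw = values.get();    // raw pointer for APIs that need one
        (void)raw;
    }   // memory released here, no manual delete anywhere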
-
Thanks :) that was helpful. I'll check out Boost and the STL. The container I've made is quite specialized, though, and works for the project I'm doing, so I might stick with it, but if there's a better way I'll try to find it before continuing with the project.
-
I forgot STL meant Standard Template Library. I had a look at some implementations of smart pointers and found some of the techniques interesting, if tending towards ugly. Then I thought of a fairly ideal, simple solution to the problem. This class allows the elements of a dynamic array to be modified while preventing the user from allocating/deallocating memory for the array itself:
    typedef unsigned int UINT;  // as in <windows.h>

    template <class TKey, class TValue>
    class SpecialMap
    {
    public:
        // Handed out instead of a raw pointer: elements can be read and
        // written, but users cannot new or delete the storage themselves.
        class InternalArray
        {
        protected:
            friend class SpecialMap;   // only the map manages pElements
            TValue* pElements;
        public:
            TValue& operator[](UINT nIndex) const { return pElements[nIndex]; }
        };
        // ...
    };
So a user knows they shouldn't manage the memory for the array, and it's still possible to use the array with functions requiring a void*, as I mentioned, for example by passing &internalArray[0]. For what my program is supposed to do, I think my custom container will be a great deal more convenient than the standard ones, but I will try to stick to the standard containers when possible.
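A hedged usage sketch of that idea follows; GetArray(), Size() and WriteToFile() are made-up names for the example, and GetArray() is assumed to return the InternalArray by const reference:

    // Hypothetical API taking a raw buffer.
    void WriteToFile(const void* /*pData*/, unsigned int /*nBytes*/) { /* ... */ }

    void Example(SpecialMap<int, float>& map)
    {
        const SpecialMap<int, float>::InternalArray& arr = map.GetArray();

        arr[0] = 1.0f;                                     // elements stay writable
        WriteToFile(&arr[0], map.Size() * sizeof(float));  // usable as a void* buffer
        // no owning pointer is ever handed out, so there is nothing to delete
    }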
-
What you say about smart pointers looking ugly: absolutely true. Template code is not easy to read and looks complicated at first because it is more abstract; on the other hand, using it is very easy. Once you get used to the (object oriented) idea of memory being handled by smart containers/pointers, you don't miss old-school manual memory allocation. After all, it is 2010 now! ;) From looking at your code... why don't you use a class inherited from std::vector<TValue> and make the methods you don't want to expose (e.g. resize and clear) protected? /M
Webchat in Europe :java: (only 4K)
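A rough sketch of that suggestion, for illustration (the class name is made up): publicly inherit the vector interface and re-declare the unwanted members as protected. Note that callers can still reach them through a reference to the std::vector base, and std::vector has no virtual destructor, so this is a guard rail rather than a hard guarantee:

    #include <vector>

    template <class TValue>
    class GuardedVector : public std::vector<TValue>
    {
    protected:
        // Re-declared as protected: ordinary callers cannot resize or clear.
        using std::vector<TValue>::resize;
        using std::vector<TValue>::clear;
    };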
-
I was going to have a look at the Boost shared/smart pointers, which have definitely been well implemented, but after I downloaded Boost my computer started playing up: the file menu in the Boost zip folder stopped working, while it was fine in other zip folders, and no viruses were detected. So I'll stay away from smart pointers for the moment because I don't need them (although technically the InternalArray class is perhaps a kind of smart pointer, since it is used like a pointer and means delete can't be called on it).

I don't want to use a vector because with full optimizations vector lookups are about 8200 times slower than dynamic arrays, meaning you could do 8200 * 1000000 accesses to elements using a dynamic array in the time it takes to do 1000000 using a vector, which should make a fair difference in the program I'm writing. And it's not only the resize and clear functions that call element destructors: even inserting an element will call destructors for all the other elements in the vector, or at least the ones that change position in the internal array. I don't see why a standard container would be implemented so that element destructors are called when the array is resized; of course I know why it works like that, just not why someone decided to implement vector/map etc. that way. Well, I'm happy with the code I've written. Thanks for the advice/suggestions :)
-
Actually, I was wrong about optimisation being a problem. The vector is only slower when using the overloaded subscript operator to access elements, but it's possible to get a pointer to the vector's internal array by taking the address of the first element, so elements can be accessed through that pointer when faster access is needed. It does mean an extra pointer has to be kept around to hold the address, which is a bit crude to program.
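A minimal sketch of that trick, for reference; the pointer stays valid only until the vector reallocates or is destroyed:

    #include <cstddef>
    #include <vector>

    long SumThroughRawPointer(std::vector<int>& vec)
    {
        int* p = &vec[0];   // points into the vector's contiguous storage

        long sum = 0;
        for (std::size_t i = 0; i < vec.size(); ++i)
            sum += p[i];    // same memory the vector manages
        return sum;
    }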
-
doug25 wrote:
I don't want to use a vector because with full optimizations vector lookups are about 8200 times slower than dynamic arrays
Strange, I cannot confirm this behaviour in my software. I don't know what you mean by "lookups", but basic read/write access with vectors should be really fast. Here is pseudocode to read a file into memory:

    CFileStream file;
    if (file.Open(sFileName, CFileStream::STREAM_READ))
    {
        std::vector<unsigned char> buffer;
        buffer.resize(file.GetSize());          // size the buffer before writing into it
        file.Read(&buffer[0], file.GetSize());
    }
doug25 wrote:
even inserting an element will call destructors for all the other elements in the vector
This is by design. Vectors use a single contiguous block of memory, so when you insert one element, a number of existing elements need to be relocated to make space in the middle. If you take, for example, a vector<unsigned char>, this gives you pretty much the same memory access as using a block of dynamically allocated bytes. There are other STL containers with different memory layouts and runtime behaviours (lists, maps, etc). So long, M
Webchat in Europe :java: (only 4K)
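A small sketch that makes the relocation visible (the Noisy type is invented for the example): inserting at the front copies and destroys the displaced elements, which is the destructor activity doug25 observed.

    #include <cstdio>
    #include <vector>

    // Each Noisy element reports when it is destroyed.
    struct Noisy
    {
        int id;
        explicit Noisy(int i) : id(i) {}
        ~Noisy() { std::printf("~Noisy(%d)\n", id); }
    };

    int main()
    {
        std::vector<Noisy> v;
        v.reserve(8);               // avoid extra reallocation noise
        v.push_back(Noisy(1));      // the temporaries also print on destruction
        v.push_back(Noisy(2));

        // Inserting at the front shifts the existing elements, which involves
        // copy construction and destruction of the displaced objects.
        v.insert(v.begin(), Noisy(0));
        return 0;
    }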
-
By "lookups" I just meant accessing an element by using the subscript operator e.g. vec[nIndex]. I took a more accurate reading. For accessing a million elements, using a dynamic array e.g. value = pArray[i] is 8455 times faster (on my pc) than using the overloaded subscript operator on a vector e.g. value = vec[i] where ints are used as elements. I'm sorted though cause I've made my own array handling functions, so I can safely use arrays. The functions make use of operator new, operator delete and placement new when required and this means I have control over how constructors / destructors are called when I need to resize array's etc.
-
By "lookups" I just meant accessing an element by using the subscript operator e.g. vec[nIndex]. I took a more accurate reading. For accessing a million elements, using a dynamic array e.g. value = pArray[i] is 8455 times faster (on my pc) than using the overloaded subscript operator on a vector e.g. value = vec[i] where ints are used as elements. I'm sorted though cause I've made my own array handling functions, so I can safely use arrays. The functions make use of operator new, operator delete and placement new when required and this means I have control over how constructors / destructors are called when I need to resize array's etc.
doug25 wrote:
8455 times faster (on my pc)
STL is a heavily optimized library. If you have found such a difference, your benchmark is most likely doing something wrong. Make sure to compile in release mode, not debug. The performance for accessing elements is the same, whether you use std::vector or not. Cheers /M
Webchat in Europe :java: (only 4K)
-
There's virtually no performance difference between a vector and a dynamic array most of the time, I agree. I'm certain my measurements are correct, but I decided to take a few more to get a better idea of the exact difference. The test program was compiled in release mode with full optimizations: Maximize Speed, Favor Fast Code, no debug info...

The results show that reading values stored in a dynamic array can be anywhere from about the same speed upwards compared to reading values from a vector in the normal way, i.e. using the overloaded subscript operator. When fewer accesses are made, there is virtually no difference in the time taken by a vector and a dynamic array, but if a million accesses are done at once, the array is about 8000 times faster. Not quite sure why, but that's how it works. Certain 3D computations, for example, might benefit from the extra performance that using dynamic arrays directly can provide. When I started using C++, I came across a program that could manipulate pixels on the screen faster than, e.g., the Windows GDI; I converted the vectors it used to dynamic arrays and got a massive difference in performance on a fast computer (that was two years ago, so it still counts as a "fast computer"). But that was unusual, so vectors are probably almost always the best way to go.
-
doug25 wrote:
When fewer accesses are made, there is virtually no difference in the time taken by a vector and a dynamic array, but if a million accesses are done at once, the array is about 8000 times faster. Not quite sure why, but that's how it works.
I just want to add that STL performance has been studied heavily over the past years, and I have not come across anyone with benchmark results like yours. Happy testing! :)
Webchat in Europe :java: (only 4K)
-