[SOLVED] Problems when trying to abstract D3D11 types in a class with C++
-
Hi all, I am a hobbyist developer retraining with the intention of learning C++ and D3D11. To give you an idea, I have been using a D3D9 engine with a programming language very similar to C for more than a decade. After doing several D3D11 tutorials, I think I have a fairly good idea of how things fit together. The thing is that these tutorials lack an architecture designed to scale, and they all end up with one function that does everything at once. Reading around, I found a short explanation of a simple way to combine all the buffers that have to be bound to the pipeline in order to draw on the screen, but I can't make it work. The system is based on a class called 'Bindable' that contains a virtual function to bind the buffers to the pipeline, and that is declared a friend of the class that initializes the graphics (Graphics) so it can access its private members.
```cpp
class Bindable
{
public:
    virtual void Bind(Graphics& _gfx) noexcept = 0;
    virtual ~Bindable() = default;
protected:
    static ID3D11DeviceContext* GetContext(Graphics& _gfx) noexcept;
    static ID3D11Device* GetDevice(Graphics& _gfx) noexcept;
};
```
I extend this class for each type of buffer, with its specific constructor and binder. For example, the mesh index buffer, which contains a pointer to the buffer and the number of indices, looks like this:
```cpp
class IndexBuffer : public Bindable
{
public:
    IndexBuffer(Graphics& _gfx, const std::vector<unsigned short>& _indices);
    void Bind(Graphics& _gfx) noexcept override;
    UINT GetCount() const noexcept;
private:
    UINT m_count;
    Microsoft::WRL::ComPtr<ID3D11Buffer> mp_buffer;
};
```
The implementation is very simple:
```cpp
IndexBuffer::IndexBuffer(Graphics& _gfx, const std::vector<unsigned short>& _indices)
    :
    m_count((UINT)_indices.size())
{
    D3D11_BUFFER_DESC _ibd = {};
    _ibd.BindFlags = D3D11_BIND_INDEX_BUFFER;
    _ibd.Usage = D3D11_USAGE_DEFAULT;
    _ibd.CPUAccessFlags = 0u;
    _ibd.MiscFlags = 0u;
    _ibd.ByteWidth = UINT(m_count * sizeof(unsigned short));
    _ibd.StructureByteStride = sizeof(unsigned short);

    D3D11_SUBRESOURCE_DATA _isd = {};
    _isd.pSysMem = _indices.data();
    GetDevice(_gfx)->CreateBuffer(&_ibd, &_isd, &mp_buffer);
}

void IndexBuffer::Bind(Graphics& _gfx) noexcept
{
    GetContext(_gfx)->IASetIndexBuffer(mp_buffer.Get(), DXGI_FORMAT_R16_UINT, 0u);
}
```
-
You should really check the return values when creating buffers - you might find the graphics card doesn't want to create a 16-bit buffer or something annoying like that. Direct3D 11 will happily run through a load of code that does nothing because something at the start wasn't set up properly. I would step through in the debugger and check that everything is being assigned, and especially that the index buffer is being set to a real value and not nullptr.
-
Thank you for your answer, Graham. I actually do check all the returns, I just trimmed the code down to the minimum. I even check the data from gdiplus.h, which is where the warnings come from. Every D3D11 function call is wrapped in an exception thrower. I already have a straightforward function that creates and releases the buffers every frame, and it works. I am trying to replace all those creations, step by step, with preloaded buffers, but with no luck. To be clearer, I am trying to substitute the index and vertex buffer creation in a function that already draws a rotating cube with the corresponding classes. The mesh description is the very same.

```cpp
// Vertex buffer description
wrl::ComPtr<ID3D11Buffer> _vertexBuf;
D3D11_BUFFER_DESC _vbd = {};
_vbd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
_vbd.Usage = D3D11_USAGE_DEFAULT;
_vbd.CPUAccessFlags = 0u;
_vbd.MiscFlags = 0u;
_vbd.ByteWidth = sizeof(_vertices);
_vbd.StructureByteStride = sizeof(Vertex);

D3D11_SUBRESOURCE_DATA _vsd = {};
_vsd.pSysMem = _vertices;

// Create vertex buffer
GFX_THROW_INFO(_hr, mp_device->CreateBuffer(&_vbd, &_vsd, &_vertexBuf));

// Bind vertex buffer to pipeline
const UINT _stride = sizeof(Vertex);
const UINT _offset = 0u;
GFX_THROW_INFO_ONLY(_v, mp_context->IASetVertexBuffers(0u, 1u, _vertexBuf.GetAddressOf(), &_stride, &_offset));

// Index buffer description
wrl::ComPtr<ID3D11Buffer> _indexBuf;
D3D11_BUFFER_DESC _ibd = {};
_ibd.BindFlags = D3D11_BIND_INDEX_BUFFER;
_ibd.Usage = D3D11_USAGE_DEFAULT;
_ibd.CPUAccessFlags = 0u;
_ibd.MiscFlags = 0u;
_ibd.ByteWidth = sizeof(_indices);
_ibd.StructureByteStride = sizeof(unsigned short);

D3D11_SUBRESOURCE_DATA _isd = {};
_isd.pSysMem = _indices;

// Create index buffer
GFX_THROW_INFO(_hr, mp_device->CreateBuffer(&_ibd, &_isd, &_indexBuf));

// Bind index buffer to pipeline
GFX_THROW_INFO_ONLY(_v, mp_context->IASetIndexBuffer(_indexBuf.Get(), DXGI_FORMAT_R16_UINT, 0u));
```

I intend to replace the top with the bottom:

```cpp
if (m_indexCount == 0)
{
    // temporary, just a build of vectors from local arrays
    std::vector<Vertex> _verts;
    for (int _i = 0; _i < 8; _i += 1)
        _verts.push_back(_vertices[_i]);
    std::vector<unsigned short> _inds;
    for (int _i = 0; _i < 36; _i += 1)
        _inds.push_back(_indices[_i]);
    m_indexCount = _inds.size();
    m_ibufIndex = AddBind(std::make_unique<IndexBuffer>(*this, _inds));
    m_vbufIndex = Add
```
-
Creating the buffers at the start makes sense, but I'm fairly sure you have to assign them to the device context on every frame. This page: Understand the Direct3D 11 rendering pipeline - Win32 apps | Microsoft Docs breaks the process down into setup and a render function that is called on each frame.