Computer Graphics Convolutions

The Lounge · Tags: graphics, c++, game-dev, hosting, cloud
#1 · Brian L Hughes

Years ago I bought an Atari ST and had fun programming in assembler, making graphics utilities and such. The entire graphics interface consisted of a single byte address of your buffer in memory that was passed to the OS. Coding was like the freedom of running naked on a tropical island. If you wanted to draw a line across the screen in one of the 3 modes, you had to figure out exactly which byte and bit of the buffer to turn on and off to make it happen. It was challenging and fun, and it was all up to you.

Today I still have fun coding, but I have to admit that things are a bit daunting. To draw that same line using an interface such as a DirectX 2D RenderTarget, you just call the DrawLine function. OK, that's easy enough, but it takes some serious research just to get the RenderTarget ComPtr initialized. Plus, we are stuck with the methods of RenderTarget. Gone are the days when every pixel of the screen was at your code's disposal. DirectX is so complicated that it's nearly impossible to figure out every nook and cranny. I'll probably never know why some of my printed characters in DirectX end up a bit garbled, despite hours and hours of "googling" for the answer. Of course, if I want that same freedom of yesteryear, I can write directly to a bitmap's buffer and BitBlt it to a DC, just not as fast as DirectX. Yes, I realize that every graphics card is different, and without the drivers I'd be stuck handling graphics modes for every known card on the market. I don't know; it seems like the finer details of DirectX are hidden behind a wall of complexity that is hard to breach.

If I could figure out how to make a line march across a screen by turning bits on and off in a buffer 35 years ago, one would think I'd be able to code up and down the 3D pipeline and create complex shaders and stuff, but no, it's all just too complicated to even want to figure out anymore. The web seriously lacks examples of all the DirectX features. I guess I'm just a cranky old man now, complaining about how things aren't as easy as they used to be. It's like today's machine isn't really a computer anymore; it's more a service with complicated contracts and laws that you have to sign off on to use. Soon, I suppose, it will just be a monitor connected to the cloud.

Yeah, I'll admit alpha channels and transparency are pretty cool. I suppose one day I'll have to post my entire RenderTarget .h and .cpp class files to the board and ask why my characters sometimes appear garbled, but not today. I think for now I'll just rem
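
For anyone curious, here is a minimal sketch of the kind of RenderTarget setup plus DrawLine the post describes. It assumes an existing HWND, hard-codes a 640x400 size, and omits all error checking; an illustration, not production code.

#include <d2d1.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void DrawOneLine(HWND hwnd)
{
    // The "serious research" part: a factory, then an HWND render target.
    ComPtr<ID2D1Factory> factory;
    D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, factory.GetAddressOf());

    ComPtr<ID2D1HwndRenderTarget> rt;
    factory->CreateHwndRenderTarget(
        D2D1::RenderTargetProperties(),
        D2D1::HwndRenderTargetProperties(hwnd, D2D1::SizeU(640, 400)),
        rt.GetAddressOf());

    ComPtr<ID2D1SolidColorBrush> brush;
    rt->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::White), brush.GetAddressOf());

    // The easy part: one call draws the line the Atari needed bit-twiddling for.
    rt->BeginDraw();
    rt->Clear(D2D1::ColorF(D2D1::ColorF::Black));
    rt->DrawLine(D2D1::Point2F(0.0f, 200.0f), D2D1::Point2F(640.0f, 200.0f),
                 brush.Get(), 1.0f);
    rt->EndDraw();
}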

#2 · Shao Voon Wong (in reply to Brian L Hughes, #1)

If you want to access the pixels to draw your line, you're better off sticking with GDI/GDI+. With the Direct2D DeviceContext (not RenderTarget), you can retrieve the bitmap from the GPU, draw your line on the CPU, and send it back to the GPU, but that slows down Direct2D's drawing operations. DeviceContext is a class that inherits from the RenderTarget class.
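
For the "write directly to a bitmap's buffer and BitBlt it to a DC" route the OP mentions, a minimal GDI sketch (assuming you are inside a paint handler with a valid HDC; error handling omitted):

#include <windows.h>
#include <cstdint>

void BlitBuffer(HDC hdc, int w, int h)
{
    // A 32-bit top-down DIB section hands you a raw pixel buffer to poke.
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = w;
    bmi.bmiHeader.biHeight      = -h;   // negative height: top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* bits = nullptr;
    HBITMAP bmp = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);

    // Every pixel at your code's disposal again: a white horizontal line.
    uint32_t* px = static_cast<uint32_t*>(bits);
    for (int x = 0; x < w; ++x)
        px[(h / 2) * w + x] = 0x00FFFFFF;   // 0x00RRGGBB

    HDC memDC = CreateCompatibleDC(hdc);
    HGDIOBJ old = SelectObject(memDC, bmp);
    BitBlt(hdc, 0, 0, w, h, memDC, 0, 0, SRCCOPY);
    SelectObject(memDC, old);
    DeleteDC(memDC);
    DeleteObject(bmp);
}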

#3 · Shao Voon Wong (in reply to Brian L Hughes, #1)

You can draw a line with DirectX or OpenGL with each pixel made up of 2 triangles. I draw a sine curve on the GPU in the YouTube video below: I send a bunch of triangles that form a straight horizontal line, and the DirectX shader transforms the straight line into a sine wave based on elapsed time. With the GPU, your thinking must shift from pixels to triangles and shaders, which is a steep learning curve with lots of coding involved. [Mandy Frenzy With All Photo Effects Shown - YouTube](https://www.youtube.com/watch?v=od1Z9nb5vwQ)
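
To make the pixels-to-triangles shift concrete, here is a rough, API-agnostic C++ sketch (names are illustrative, not taken from the video) of expanding a one-pixel-tall horizontal segment into the two triangles the GPU actually rasterizes:

#include <vector>

struct Vertex { float x, y; };

// Quad covering [x0, x1) by [y, y + 1): two triangles, six vertices.
void appendSegment(std::vector<Vertex>& out, float x0, float x1, float y)
{
    Vertex a{ x0, y }, b{ x1, y }, c{ x1, y + 1.0f }, d{ x0, y + 1.0f };
    out.insert(out.end(), { a, b, c, a, c, d }); // triangle 1, then triangle 2
}

A vertex shader can then displace each y by amplitude * sin(frequency * x + time) to bend the straight strip into the sine wave described above.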

#4 · BillWoodruff (in reply to Brian L Hughes, #1)

          Brian L Hughes wrote:

          one day I'll have to post my entire RenderTarget .h and .cpp class files to the board

Maybe you can revive this forum: [^]

          «The mind is not a vessel to be filled but a fire to be kindled» Plutarch

#5 · Phil J Pearson (in reply to Brian L Hughes, #1)

I used to do such raw coding too, and also had a lot of fun doing it, even when I got to do it as part of my day job. What I can't for the life of me remember now is how I ever found out how to begin, at a time when there was no Internet! (Yes, kids - there was such a time, long long ago.)

            Phil


            The opinions expressed in this post are not necessarily those of the author, especially if you find them impolite, inaccurate or inflammatory.

#6 · Lost User (in reply to Phil J Pearson, #5)

              Phil J Pearson wrote:

              there was such a time, long long ago

              When we had no choice but to study the documentation, or figure things out for ourselves.

#7 · jschell (in reply to Brian L Hughes, #1)

                Brian L Hughes wrote:

Today I still have fun coding, but I have to admit that things are a bit daunting

That is what happens when you turn on the gore flag and the reflections-in-water flag. You can, of course, go back to those good old days: get an actual Atari ST and use it, or run an emulator such as Hatari[^].

#8 · Jeremy Falcon (in reply to Brian L Hughes, #1)

I feel your pain... using COM in languages like C, for instance (my BFF language), is um... well... interesting. C++ is at least a little friendlier in that regard, but sometimes that bare-metal feeling in C is just what good times are made of. Some folks prefer DX, but if you want that old-skool feeling then IMO OpenGL (OGL) will give you that. It feels a lot like GDI programming. If you want to draw a line, for instance:

glBegin(GL_LINES);   // legacy immediate mode
glVertex2f(x1, y1);  // start point
glVertex2f(x2, y2);  // end point
glEnd();

Boom. Done. The only thing that will make that line not look like a line is your transformation/projection matrix. But if you use a straight-up orthographic projection, it's the same concept as in the olden days, because OGL isn't a full-blown engine at all. It's really just a graphics library that lets you talk directly to your hardware so crap draws quicker. That's all. Nothing more. Nothing less.
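
For reference, the projection setup that makes those coordinates behave like old-school framebuffer pixels might look like this in legacy fixed-function GL (w and h here stand for your window size):

// Map GL coordinates 1:1 onto window pixels, origin at top-left.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, w, h, 0.0, -1.0, 1.0); // left, right, bottom, top, near, far
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();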

                  Jeremy Falcon

#9 · Lost User (in reply to Brian L Hughes, #1)

I'm doing all that within the "confines" of .NET: reading and writing pixel buffers and writeable bitmaps at run time. UWP has two UI-related threads: "the" UI thread and a "composition" thread that, for better performance, renders elements that don't require layout changes. The things you want are there; it's just that the hard work's been done. Your "moving line", today, would require a Canvas element, a Line element (just a length, stroke and color), and a Storyboard element to animate it: about 10 lines of XAML... and you could rotate it while you were at it with an extra line or two. There is still joy to be found out there.

                    "Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I

#10 · Brian L Hughes (in reply to Jeremy Falcon, #8)

I went through an OpenGL 3D tutorial start to finish, but I haven't yet applied myself to adapting my current DX library to it. My library consists of handmade controls and dialog-box panels to house them, as well as the main surface to display a game. I was going to try to make a 2D game out of it all, but it stalled when I couldn't figure out why characters were glitching. I have a text editor box, listview box, drop-downs, an RGB color picker, buttons, labels, BMP display, and check and radio boxes, all of them rendered by hand. I tried my best to enable interface scaling on everything, and got it all working. I even made a match-3 game like Bejeweled, where you slide blocks around, they disappear, and the other blocks slide down to fill their place. But the printed text sometimes appears garbled on the screen; the edges look funny. I could gut the DX (it's all in one C++ class) and replace it with OpenGL; it'd take some time, but it's doable.

#11 · Brian L Hughes (in reply to Lost User, #9)

I use C# a lot, for quite a few non-graphical utilities. I like how easy it is to code classes and methods for complicated stuff without the hassle of dealing with C++'s single-pass compiler. For the life of me I can't understand why Microsoft won't wire in and expose DirectX to C#. I actually started my DirectX experience with the now-defunct SharpDX and decided to move to C++ since SharpDX was no longer supported. It's kind of weird; you'd think they'd want to expose DX to help move along C#'s takeover of C++. I like using C++ as well, but the single-pass compiler constraint is a pain. I was thinking of trying Unity, but I think that would box me in even further.

#12 · Brian L Hughes (in reply to Phil J Pearson, #5)

Books! I used to shop at Barnes and Noble all the time. Atari ST Books[^]: I had several of the books shown in the link.

#13 · rtischer8277 (in reply to Brian L Hughes, #1)

I'll stick with C++, MFC, and GDI, thank you. MS is finally doing a great job on the docs; it only took them 40 years. Churchill was right: you can always count on the Americans to do the right thing... after they have tried everything else.

#14 · Gary Wheeler (in reply to Brian L Hughes, #1)

Sounds like you and I are of the same vintage. My earliest graphics programming was in 1982, at college. Our machines were Z-80s with 64K of RAM. The graphics card displayed 256x256 in 8 colors on a $10,000 color monitor. Graphics memory was bank-switched into the lower 48K of the address range. You accessed it by calling a function in the upper 16K, which switched out main memory, switched in the graphics memory, did the drawing primitive (pixels, lines, and flood fills), and then switched main memory back before returning. Occasionally the bank switching would not work, so you would get to watch your code executing on the screen instead of the image you were drawing. I took both computer graphics classes and then did an independent study project, implementing the Fuchs et al. binary space partitioning (BSP) algorithm for 3D hidden-surface removal. This is the same algorithm used in DOOM. During this time I spent more than one 40-hour day debugging. Good times.

                              Software Zen: delete this;

#15 · Kate X257 (in reply to Brian L Hughes, #11)

C# isn't a good fit, that's all. To work with hardware effectively, you need pointers and the ability to compile down to highly optimised machine code. To work with apps effectively, you need a good IL, a garbage collector, and a strict ban on pointers. Rust and DirectX would make more sense to me. (But I have to admit that I don't need hardware access very often, so what do I know?)

#16 · Jeremy Falcon (in reply to Brian L Hughes, #10)

                                  Brian L Hughes wrote:

But the printed text sometimes appears garbled on the screen; the edges look funny.

Just to make sure I follow: you mean the edges of the text? What type of font are you loading (if any), or is this just bit-mapped? And does it have anti-aliasing? These days anti-aliasing is all but required for text to look right on current-gen LCD monitors. If you've got a CRT lying around, you could always view your game on that just to see if that's the issue; the less crisp nature of a CRT display will do the anti-aliasing for you.

                                  Jeremy Falcon

#17 · Peter Shaw (in reply to Brian L Hughes, #1)

This is EXACTLY why I still enjoy programming stuff using SDL. Not only that, but since SDL can also compile to web targets using Emscripten, I can still write all these strange little 2D demos like I did back in my demo days, and they compile for Windows, web, Linux, and Mac all from the same code base. Add to that, you can also access SDL from C# and .NET these days. Makes for a very happy dev... well, when I get a break from boring-as-hell LOB app projects (AKA massive databases with a new coat of paint on them), anyway.
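
For anyone who wants to try it, a minimal SDL2 sketch of the OP's line across the screen (assuming SDL2 is installed and linked; error handling trimmed):

#include <SDL.h>

int main(int argc, char* argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* win = SDL_CreateWindow("line", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 400, 0);
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;

        SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);       // clear to black
        SDL_RenderClear(ren);
        SDL_SetRenderDrawColor(ren, 255, 255, 255, 255); // white line
        SDL_RenderDrawLine(ren, 0, 200, 639, 200);
        SDL_RenderPresent(ren);
    }
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}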

#18 · jochance (in reply to Brian L Hughes, #1)

James Randall[^]: Blazor Wolfenstein 3D. We're on the same page about a bunch of what you had to say. I've been working with this as a codebase to do other things. What you'll find interesting is how it exposes a "screen buffer": you write all the pixel color values into an array of uints. This is done in C#'s unsafe context with a pinned bit of memory and, I believe, the typical frame-flip technique, all inside a Blazor app where the C# is compiled to WebAssembly and runs client-side.

#19 · Brian L Hughes (in reply to rtischer8277, #13)

                                        I made my own MFC, without CView
