
Would you release this or not?

The Lounge · 37 Posts · 13 Posters
Tags: help, tutorial, question, announcement, graphics
Luc Pattyn wrote:

You don't need a logic analyzer, you don't even need a scope, to get timing specs right: you should get them right by design, not by inspection. You do need the data sheets of the components involved. Taking care of setup and hold times is the first thing you should do; they are a crucial part of the contract you have with the chip vendor. A data sheet is a unilateral contract; there is no way around it. :)

    Luc Pattyn [My Articles] The Windows 11 "taskbar" is disgusting. It should be at the left of the screen, with real icons, with text, progress, etc. They downgraded my developer PC to a bloody iPhone.

honey the codewitch (#22):

The problem with that is simply effort: I'd basically need to know the timings for absolutely everything. It would take me months to write code that should take me days. I think I'll pass.

    Real programmers use butterflies

Luc Pattyn (#23):

That is overly pessimistic. Systems get designed without knowing "everything". What you do need is a basic approach to the setup-and-hold issue, so make sure that:

A) data transfers don't overlap;
B) each data transfer consists of three phases:
   1. set the data ready,
   2. issue the clock/latch pulse,
   3. remove the data (i.e. guarantee the hold spec).

These steps must remain in sequence, with non-zero time between them. As electronic setup and hold requirements are in the range of (tens of) nanoseconds, one or a few instructions in between normally suffices. How you get that depends on the environment and available tooling: if a general-purpose driver is present, consider using three separate I/O operations; if a specialized driver is used, it should take care of the details itself. Once you have a solution, use it everywhere; separation of concerns applies at all levels.

PS: Beware of optimizing compilers; low-level code is best collected in a separate file that gets handled with other tools or tool settings. :)
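A minimal host-side sketch of the three-phase sequence above, in C. The `gpio_write` and `delay_cycles` stubs are hypothetical stand-ins that just record the event order; on a real MCU they would map to the vendor's GPIO write and a short calibrated delay:

```c
#include <stdio.h>

/* Hypothetical stubs: record each event so the phase order is visible.
 * On real hardware these would be the vendor's GPIO primitive and a
 * delay of a few instructions (enough to cover tens of nanoseconds). */
static char trace[64];
static int  tn = 0;

static void gpio_write(char line, int level)
{
    /* uppercase = line driven high, lowercase = line driven low */
    trace[tn++] = level ? line : (char)(line - 'A' + 'a');
}

static void delay_cycles(unsigned n) { (void)n; trace[tn++] = '.'; }

/* Latch one bit with explicit setup and hold phases. */
static void latch_bit(int bit)
{
    gpio_write('D', bit);   /* 1. set the data ready             */
    delay_cycles(4);        /*    wait out the setup spec (t_su) */
    gpio_write('C', 1);     /* 2. issue the clock/latch pulse    */
    delay_cycles(4);        /*    honor the minimum pulse width  */
    gpio_write('C', 0);     /*    end of pulse                   */
    delay_cycles(4);        /* 3. wait out the hold spec (t_h)   */
    /* only now may the data line change again */
}
```

The key property is that the data line is stable strictly before, during, and after the latch edge; the actual delay lengths come from the chip's datasheet, not from this sketch.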

honey the codewitch (#24):

        This is all an excellent argument for using my logic analyzer.

Luc Pattyn (#25):

That does not make any sense to me; it sounds like using a debugger when the code does not even compile. BTW: a logic analyzer also has setup and hold requirements!

honey the codewitch (#26):

Okay. I don't time the SPI in software; it is timed by a controller on the MCU I'm using. No compiler in the world is going to tell me what that hardware is producing. A logic analyzer will. So if I want to make sure the signals don't overlap, I'm looking at the bus output with my Saleae. Full stop.

Luc Pattyn (#27):

Displays have their own requirements, no matter what bus or interface is being used. Their functionality is typically microcontroller based: simple commands take a few microseconds to process, while more complex commands (total reset, return home, row clear, ...) may run into a few milliseconds. Obviously you have to take care of that; SPI or any other interface won't do it for you.

If you want to debug that with an LA, be my guest. My first approach would be to add some code that either checks things in software (assert a minimum timespan between commands) or generates a log file. Yes, I'm aware this by itself may change the timing a bit, but it can tell me where things are insufficient or marginal. :)
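The software check suggested above can be a two-line guard around each command: remember when the previous command's processing window ends, and assert before issuing the next one. `now_us()` below is a hypothetical stand-in for the platform's microsecond clock (Arduino's `micros()`, for example), driven here by a fake counter so the sketch runs on a host:

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical clock: on a target this would read a hardware timer. */
static uint32_t fake_clock_us;
static uint32_t now_us(void) { return fake_clock_us; }

static uint32_t busy_until_us;  /* end of current processing window */

/* Assert that the previous command has had its datasheet time. */
static void assert_display_ready(void)
{
    /* signed subtraction handles the 32-bit wrap-around */
    assert((int32_t)(now_us() - busy_until_us) >= 0);
}

/* Note that the command just issued needs `cost_us` to process
 * (a few us for simple commands, a few ms for reset/clear). */
static void note_command_cost(uint32_t cost_us)
{
    busy_until_us = now_us() + cost_us;
}
```

In a release build the assert can become a busy-wait or a logged warning; either way it pinpoints which command sequence is being issued too quickly.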

honey the codewitch (#28):

                Luc Pattyn wrote:

                My first approach would be to add some code to either check things by software (assert minimum timespan between commands)

And which display model and chip should I start with, given that the same exact thing (including exactly how it fails) happens on literally all of them: ST7789, ILI9341, and SSD1351 alike? So which datasheet do I start with, since they all fail exactly the same way?

Luc Pattyn (#29):

If you want all of them to work properly, it does not really matter: you would have to solve all the problems anyway. But then, the ST7789S and ILI9341 look very similar, while the SSD1351 is clearly different. Assuming nothing else is a factor (e.g. all the hardware looks equally reliable), I would start with the ST7789S or ILI9341, whichever you can get the most recent datasheet for. Or the most intelligible one, as not all Asian-to-English translations are equally successful. :)

Jo_vb net wrote:

I think you should wait until you can fix the SPI problems, and save other people's patience.

bmarstella (#30):

Not necessarily a common thing, but I've grabbed at least four different projects over the years that were only partially working. Typically I've fixed just the components that were problematic for me, but it saved me time to stand on the shoulders of others rather than having to build everything from scratch.

honey the codewitch wrote:

Sorry, I'm not sure where to put this question, but I'd like opinions. I have a graphics library, primarily targeting IoT devices. It has drivers for several displays. The drivers could be faster, tbh. All of these drivers communicate over SPI (well, there are I2C ones too, but forget those for now). For those of you that don't know, SPI is a serial wire protocol that came about in the 1980s or so; your SD cards are SPI "slave devices".

I wrote a parallel driver for displays that support it. That means 8 data lines instead of one. It's not 8 times as fast, for hardware reasons, but it's much faster. I also refactored my new driver code so that it's layered, separating the concerns of driving the bus (either parallel or SPI) from operating the specific display chip (like an ILI9341 or an ST7789).

The problem is this: I've optimized the SPI code in this new version, and it only works on certain boards. There are timing issues. It's probably too fast, but I've had no luck getting it to work reliably. It only displays part of the tests, and then it freezes on most displays. As far as "too fast" goes, maybe the CS line control is too fast; it's not simply the SPI rate, I've tried changing that. For example, there's something called the VDI rail that somehow needs more time to register a line change. I don't know more about it; it was just from sifting through Bodmer's code comments in TFT_eSPI, which I've been using as a guide.

So to recap: in the old code it's all unoptimized SPI, but it works. In the new code there's also parallel support, and the unoptimized SPI works but the optimized SPI does not. I *could* release it, disabling the SPI optimizations and keeping the parallel support and refactored code, but this release would have breaking changes: people would need to change existing code to use it. I have no idea how long it will take me to fix the SPI. It vexes me.

My question is: should I release it, or should I wait until I can fix the SPI problems, especially since it's a change that is disruptive to people's codebases?
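The layered split described in the post, bus concerns separated from chip concerns, might look like this function-pointer sketch in C. The names are illustrative, not the library's actual API; 0x2A/0x2B are the standard CASET/RASET window commands shared by the ILI9341 and ST7789:

```c
#include <stdint.h>
#include <stddef.h>

/* A bus ships command and data bytes to the panel; SPI and 8-bit
 * parallel each supply their own implementation of this interface. */
typedef struct {
    void (*write_cmd)(uint8_t cmd);
    void (*write_data)(const uint8_t *buf, size_t len);
} bus_t;

/* The chip driver knows only the command set, never the wire. */
static void lcd_set_window(const bus_t *bus, uint16_t x0, uint16_t y0,
                           uint16_t x1, uint16_t y1)
{
    uint8_t col[4] = { x0 >> 8, x0 & 0xFF, x1 >> 8, x1 & 0xFF };
    uint8_t row[4] = { y0 >> 8, y0 & 0xFF, y1 >> 8, y1 & 0xFF };
    bus->write_cmd(0x2A); bus->write_data(col, 4);  /* CASET */
    bus->write_cmd(0x2B); bus->write_data(row, 4);  /* RASET */
}

/* Host-side stub bus so the sketch is runnable without hardware. */
static uint8_t last_cmd, data_count;
static void stub_cmd(uint8_t c) { last_cmd = c; }
static void stub_data(const uint8_t *b, size_t n)
{
    (void)b;
    data_count += (uint8_t)n;
}
static const bus_t stub_bus = { stub_cmd, stub_data };
```

One payoff of this split is that a timing bug in the optimized SPI path stays confined to a single `bus_t` implementation; the chip drivers never need to change.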

Cpichols (#31):

                      Since you're asking for opinion: You seem to be a very creative person, so I'm guessing that you have ideas bubbling away on the back burner of your mind all the time. Back burner this one. Let it rest from up-front work while you work on other projects, and the answer might just hit you along the way.

Member 13932523 (#32):

1. Optimised code that does not work is not optimised code; it's broken code.
2. Get on and fix it: use a good LA to compare timing and data values on the bus between the cases where it works and where it doesn't.
3. Do not release known-broken code.
4. Consider that people will be asking to use DMA...

honey the codewitch (#33):

                          I don't like it when people are pedantic. You know what I mean in #1. The DMA is actually the only part that's working consistently.

Member 13932523 (#34):

Yes, I do know what you mean. But the point is that consumers of your library will not know (or really care?) about the history of the code; they just want code that works and is good quality. If you change the wording and ask whether consumers would like un-optimised code or broken code, neither sounds all that appealing. :-D

honey the codewitch (#35):

As I said in my OP, though maybe I wasn't clear: I wouldn't be releasing code that didn't work. I'd simply dial back the optimizations until they weren't there anymore, leaving it functioning the same way the existing released code does (at least for SPI). I should add that I've already decided not to release it, so this exchange is moot outside of the hypothetical. Just FYI.

Member 13932523 (#36):

                                I see :cool:

Matt McGuire (#37):

It's been a couple of years since I last wrote an SPI-to-display setup, but if I'm remembering correctly there is a minimum time threshold for the slave device to register the tick. Usually the PDF for the display chip has the min and max values. But I'm likely saying something you already know.

Have you tried putting some empty wait commands between operations to slow things down a few cycles, and seeing whether the displays that aren't working correctly start working again? If they do, could you define a couple of variables when initializing the code, like _DSP_FAST = 0, _DSP_MED = 16, _DSP_SLOW = 32, and tie those into wait loops?

On the release question: I wouldn't, unless there is a clear advantage to your optimized code, like a solid 10% gain (or more) in clock ticks that can be shed over to other processing tasks, and even then you should have a disclaimer for which displays are known to work and which aren't. Gosh, I miss working on that stuff. Good luck with whichever direction you go. :)
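The tunable-delay idea above, as a compilable sketch. The tier names are adapted from the post (a leading underscore followed by a capital is reserved in C, so the underscores are dropped), and the busy-wait body is illustrative; real calibration depends on the target clock:

```c
#include <stdint.h>

/* Speed tiers from the post: extra idle iterations per bus operation. */
enum { DSP_FAST = 0, DSP_MED = 16, DSP_SLOW = 32 };

static uint32_t dsp_wait = DSP_FAST;  /* selected once at init time  */
static uint32_t spin_count;           /* host-side: proves loop runs */

/* Busy-wait between SPI operations; `volatile` on the loop counter
 * keeps an optimizing compiler from deleting the loop entirely,
 * which is exactly the compiler hazard mentioned earlier in the
 * thread. */
static void spi_tick_delay(void)
{
    for (volatile uint32_t i = 0; i < dsp_wait; ++i)
        ++spin_count;
}
```

Starting at DSP_SLOW and stepping down until a display breaks gives a quick empirical bound on how much margin each panel actually needs.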
