How did they do it?

The Lounge | Code Project
Tags: csharp, iot, performance, help, question
27 Posts, 11 Posters
  • H honey the codewitch

    I'm working on complicated MIDI wizardry for IoT gadgets, so I can make MIDI "smart" pedals and controllers and such. Playing a multitrack MIDI file without reading it into memory is a bear. I have just not been able to get this code right. The trouble is that each MIDI event has a delta attached to it that is the offset in "MIDI ticks"* from the previous event. (*A MIDI tick is a fixed time duration based on the tempo and timebase.) With a multitrack MIDI file, each track has its own sequence of events, and the deltas are all relative to that track. However, in order to play them, you must merge all the tracks into one event stream, adjusting the deltas. The actual adjusting of the deltas isn't so bad, but the logic to figure out when to pull from which track - I'm not even sure I have it right yet, because my code has other issues.

    My point in all this is that MIDI is an early-1980s protocol, and multitrack MIDI isn't exactly brand spanking new. Sequencers with scant amounts of RAM were doing this. I feel like in so many ways MIDI was designed to make it possible to do things on little devices without much RAM. But for this particular operation - in C# I just merged the tracks in memory before I played them. I can't afford the RAM or the CPU to do that here. I have to stream everything.

    And I've convinced myself I'm overcomplicating things. I hate when I do that - it means I have tunnel vision, and/or am missing something big and important. I don't like knowing that I don't know something I need to know, you know? It bugs me, like a song that's stuck in my head I can't remember the entire hook to.

    To err is human. Fortune favors the monsters.
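
For illustration, here is a minimal, self-contained sketch of the streaming merge being described (all names and helpers are invented for this example, and the per-track file streams are faked with in-memory delta lists): keep one pending event per track, always emit the earliest one, and recompute its delta against the merged output rather than against its source track.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // One cursor per track: it holds only that track's *next* event, so the
    // whole merge keeps N pending events in memory for N tracks.
    struct TrackCursor {
        std::vector<uint32_t> deltas;  // stand-in for the track's on-disk delta stream
        size_t   index    = 0;         // next unread event in that stream
        uint64_t nextTick = 0;         // absolute tick of the pending event

        bool done() const { return index >= deltas.size(); }
    };

    int main() {
        // Two toy tracks, each a list of per-track delta times.
        std::vector<TrackCursor> tracks = {{{0, 480, 480}}, {{240, 480}}};

        // Prime each cursor: the first event's absolute tick is its own delta.
        for (auto &t : tracks)
            if (!t.done()) t.nextTick = t.deltas[0];

        uint64_t lastEmitted = 0;  // absolute tick of the last event sent to the output
        for (;;) {
            // Find the track whose pending event is earliest.
            TrackCursor *best = nullptr;
            for (auto &t : tracks)
                if (!t.done() && (!best || t.nextTick < best->nextTick)) best = &t;
            if (!best) break;  // every track exhausted

            // Re-delta against the merged stream, not against the source track.
            uint32_t mergedDelta = static_cast<uint32_t>(best->nextTick - lastEmitted);
            std::printf("event at tick %llu (merged delta %u)\n",
                        static_cast<unsigned long long>(best->nextTick), mergedDelta);
            lastEmitted = best->nextTick;

            // Pull the next event from that track only.
            ++best->index;
            if (!best->done()) best->nextTick += best->deltas[best->index];
        }
        return 0;
    }

Only one event per track is ever resident, so memory stays at N pending events regardless of how long the file is.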

    ot_ik_ wrote (#11):

    It's been a long time since I worked with MIDI files. Aren't you mixing up tracks and channels? Why would you play multiple tracks at once?

    • H honey the codewitch

      How do I know which track to pull an event from next? That's where it gets weird.

      To err is human. Fortune favors the monsters.

      Dougy83 wrote (#12):

      My last response is suspected of being spam, probably because it contains code, and programming questions are verboten in the Lounge. Can you provide a link to your question in the programming section? The code is very simple.

      • H honey the codewitch

        How do I know which track to pull an event from next? That's where it gets weird.

        To err is human. Fortune favors the monsters.

        Dougy83 wrote (#13):

        From whichever tracks have the next time equal to the minimum-next-time of all tracks:

        void playNextNotes()
        {
            // Find the earliest pending timestamp across all tracks.
            int nextTime = INT_MAX;   // from <climits>; the original MAX_INTVAL isn't standard
            for (auto &track : tracks)
            {
                if (track.getNextTimestamp() < nextTime)
                    nextTime = track.getNextTimestamp();
            }

            waitUntil(nextTime);

            // Play every event that is due now, from whichever tracks hold one.
            for (auto &track : tracks)
            {
                if (track.getNextTimestamp() == nextTime)
                {
                    auto event = track.getNextEvent();
                    midi.playEvent(event);
                }
            }
        }

        • H honey the codewitch

          I'm working on complicated MIDI wizardry for IoT gadgets, so I can make MIDI "smart" pedals and controllers and such. Playing a multitrack MIDI file without reading it into memory is a bear. […]

          Cpichols wrote (#14):

          I know this feel. The problem is always present in your mind like an earworm. It invades your dreams and you can't escape it even though that is what it will take to gain a fresh perspective on it. Have you tried any of the meditation apps? Maybe you could attend a yoga class? Sometimes writing down or talking out everything you know about the issue can help offload some of the brain activity. There is a solution. Try to be settled by that.

          • D Dougy83

            From whichever tracks have the next time equal to the minimum-next-time of all tracks: […]

            honey the codewitch wrote (#15):

            I got it all working last night. The trick was in implementing your "getNextEvent()" method correctly (I don't call mine that, but same-o same-o).

            To err is human. Fortune favors the monsters.
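
As an aside, the usual sticking point in a per-track event reader like that is MIDI's delta-time encoding itself; for reference, here is the standard SMF variable-length-quantity decode in isolation (toy code, not her implementation):

    #include <cstdint>
    #include <cstdio>

    // Decode one MIDI variable-length quantity (the SMF delta-time encoding):
    // 7 payload bits per byte, most-significant group first, and the high bit
    // set on every byte except the last. The pointer p stands in for whatever
    // the track reader pulls from the file stream. (A real reader would also
    // cap the loop at 4 bytes, the spec maximum.)
    static uint32_t read_varlen(const uint8_t *&p) {
        uint32_t value = 0;
        uint8_t b;
        do {
            b = *p++;
            value = (value << 7) | (b & 0x7F);
        } while (b & 0x80);   // continuation bit set: more bytes follow
        return value;
    }

    int main() {
        // 0x81 0x48 encodes 200 ticks; 0x00 encodes 0 ticks.
        const uint8_t track[] = {0x81, 0x48, 0x00};
        const uint8_t *p = track;
        std::printf("%u\n", read_varlen(p));  // 200
        std::printf("%u\n", read_varlen(p));  // 0
        return 0;
    }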

            • H honey the codewitch

              How do I know which track to pull an event from next? That's where it gets weird.

              To err is human. Fortune favors the monsters.

              Dougy83 wrote (#16):

              The class code is here: honey code - My Paste Text[^]

              • W Wizard of Sleeves

                From what I recall, having done some MIDI stuff in the days before time on a 1 MHz 8-bit CPU with 4K of RAM, the data stream ran at 31.25 kbaud. [insert math equation here] That worked out to one tick every 10 milliseconds, all 16 channels combined. So all I had to do was preprocess everything in less than 10 ms.

                Nothing succeeds like a budgie without teeth. To err is human, to arr is pirate.
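
For reference, the wire-rate half of that arithmetic works out roughly as follows (my numbers, not a reconstruction of the missing equation): MIDI runs at 31,250 baud with 8-N-1 framing, i.e. 10 bits per byte on the wire.

    #include <cstdio>

    // Back-of-the-envelope MIDI wire-rate numbers.
    int main() {
        const double baud          = 31250.0;
        const double bytes_per_sec = baud / 10.0;            // 3125 bytes/s
        const double ms_per_byte   = 1000.0 / bytes_per_sec; // 0.32 ms per byte
        std::printf("%.2f ms per byte, %.2f ms per 3-byte message\n",
                    ms_per_byte, 3 * ms_per_byte);           // ~0.96 ms per Note On/Off
        return 0;
    }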

                honey the codewitch wrote (#17):

                Yeah, it wasn't really the speed that was my problem. It was the difficulty of streaming midi file tracks while merging them without loading more than I absolutely had to in RAM at once. I got it working. It only keeps N messages in memory at a time, where N is the number of tracks. That's about as good as it gets I think.

                To err is human. Fortune favors the monsters.

                • H honey the codewitch

                  I got it all working last night. The trick was in implementing your "getNextEvent()" method correctly (I don't call mine that, but same-o same-o)

                  To err is human. Fortune favors the monsters.

                  Dougy83 wrote (#18):

                  All good. Every problem is difficult, until it's not :laugh: .

                  • C Cpichols

                    I know this feel. The problem is always present in your mind like an earworm. […]

                    honey the codewitch wrote (#19):

                    I found the solution. It couldn't hide from me forever. :)

                    To err is human. Fortune favors the monsters.

                    • H honey the codewitch

                      I'm working on complicated MIDI wizardry for IoT gadgets, so I can make MIDI "smart" pedals and controllers and such. Playing a multitrack MIDI file without reading it into memory is a bear. […]

                      Gary Wheeler wrote (#20):

                      For a moment forget about your environment and try to think like the original developers. You've got very limited RAM, and slightly-less limited code space. This means your code has to be clever. It also implies you don't necessarily have to handle the arbitrary, general case where all possibilities are handled regardless of what seems to be allowed by the parameters. As an example, suppose you have a signed 16-bit value to handle. Does the usage really need to allow for negative values? What about zero (0)? What's the actual, practical range for the value? Figuring out the actual, implicit (and undocumented) constraints can help figure out a practical algorithm.

                      Software Zen: delete this;

                      • G Gary Wheeler

                        For a moment forget about your environment and try to think like the original developers. […]

                        honey the codewitch wrote (#21):

                        I totally agree with this, and when I originally planned this I was writing clever code. :) I use signed values for most things because MIDI is largely a 7-bit protocol. That way, if I accidentally set the sign bit, the number jumps out at me as negative. In the end I solved it. It took me moving my code to a real PC so I could fire up a debugger. It was just too complicated to work through it without one.

                        To err is human. Fortune favors the monsters.
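
The sign-bit trick reads roughly like this in practice (a toy illustration, not her code): legal MIDI data bytes are 0-127, so keeping them in a signed 8-bit variable makes any stray high bit show up as a negative number.

    #include <cstdint>
    #include <cstdio>

    int main() {
        int8_t velocity = 0x64;            // 100: a legal 7-bit MIDI data byte
        std::printf("%d\n", velocity);     // prints 100

        // Simulate a stray write setting bit 7; the value now prints as
        // negative and jumps out immediately in any debug output.
        velocity = static_cast<int8_t>(velocity | 0x80);
        std::printf("%d\n", velocity);     // prints -28: obviously wrong at a glance
        return 0;
    }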

                        • H honey the codewitch

                          I totally agree with this, and when I originally planned this I was writing clever code. :) […]

                          Gary Wheeler wrote (#22):

                          honey the codewitch wrote:

                          It took me moving my code to a real PC so I could fire up a debugger. It was just too complicated to work through it without one.

                          I like it! I've used this approach on occasion when I "couldn't see the forest for the trees".

                          Software Zen: delete this;

                          • G Gary Wheeler

                            honey the codewitch wrote:

                            It took me moving my code to a real PC so I could fire up a debugger. It was just too complicated to work through it without one.

                            I like it! I've used this approach on occasion when I "couldn't see the forest for the trees".

                            Software Zen: delete this;

                            honey the codewitch wrote (#23):

                            It also helps keep it cross-platform, so it's win-win. I developed most of my GFX library for IoT devices on a PC. I would just draw to in-memory bitmaps and then write those to the console as ASCII. =)

                            To err is human. Fortune favors the monsters.
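
A minimal version of that draw-to-memory-then-dump-as-ASCII trick might look like this (generic toy code; it is not the GFX library's actual API): a 1-bit-per-pixel frame buffer printed to the console with '#' for set pixels and '.' for clear ones.

    #include <cstdint>
    #include <cstdio>

    constexpr int W = 16, H = 8;
    static uint8_t fb[W * H / 8];  // 1 bpp, row-major, MSB-first within each byte

    static void set_pixel(int x, int y) {
        fb[(y * W + x) / 8] |= 0x80 >> ((y * W + x) % 8);
    }

    static void dump_ascii() {
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                bool on = fb[(y * W + x) / 8] & (0x80 >> ((y * W + x) % 8));
                std::putchar(on ? '#' : '.');
            }
            std::putchar('\n');
        }
    }

    int main() {
        for (int x = 0; x < W; ++x) set_pixel(x, x % H);  // draw a simple zig-zag
        dump_ascii();
        return 0;
    }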

                            • H honey the codewitch

                              It also helps keep it cross platform, so it's win win. I developed most of my GFX library for IoT devices on a PC. I would just draw to in memory bitmaps and then write those to the console as ascii. =)

                              To err is human. Fortune favors the monsters.

                              Gary Wheeler wrote (#24):

                              honey the codewitch wrote:

                              memory bitmaps and then write those to the console as ascii

                              That triggers a flashback to the early 90's. At the time all of the fonts in our printers were hand-drawn bitmaps. We had a text format where each character's bitmap was drawn using "." for a white pixel, and "@" for a black one. We then had a utility that would convert the text format to the binary form of the font file.

                              Software Zen: delete this;
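
A toy version of that conversion step could look like this (my sketch, not their actual utility): read each glyph row of '.' and '@' characters and pack it into bytes, MSB first, as a binary font blob would store it.

    #include <cstdint>
    #include <cstdio>
    #include <string>
    #include <vector>

    int main() {
        const std::vector<std::string> glyph = {   // a hand-drawn 8x8 'L'
            "@.......",
            "@.......",
            "@.......",
            "@.......",
            "@.......",
            "@.......",
            "@@@@@@@@",
            "........",
        };

        for (const auto &row : glyph) {
            uint8_t bits = 0;
            for (size_t i = 0; i < row.size() && i < 8; ++i)
                if (row[i] == '@') bits |= 0x80 >> i;   // black pixel -> set bit
            std::printf("0x%02X\n", bits);              // one packed byte per glyph row
        }
        return 0;
    }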

                              • G Gary Wheeler

                                That triggers a flashback to the early 90's. At the time all of the fonts in our printers were hand-drawn bitmaps. […]

                                honey the codewitch wrote (#25):

                                Haha, I had a Python script that created or extracted Win 3.1 .FON files in that format. I used it as a reference to create my raster font loading routines.

                                To err is human. Fortune favors the monsters.

                                • H honey the codewitch

                                  That's essentially what I do as far as the deltas. My pre-stream is n contexts, where n is the number of tracks. I use those to pull events out in the right order.

                                  To err is human. Fortune favors the monsters.

                                  Kirk 10389821 wrote (#26):

                                  Isn't this effectively a merge sort from N sources? Each source is already sorted. I admit my ignorance in how you calculate your context, but I assume that EITHER you have an incoming stream of N contexts pre-sorted (which would make outputting it more trivial, as you output it in the order it arrives), OR you have an incoming stream with N tracks, where each track is at a specific offset, but the magic is that while track 1 has a small delta, track 3 could have an excessively large delta, so it is played at the right time. And you might not hear from track 3 for some time.

                                  I often find it helpful to imagine how they implemented the player at the hardware level. For my take, I would write something that played only 1 of N contexts correctly... Then I would look hard at how to simply add a second context to that, based on how the data shows up. Because by the time you get to the third or fourth, I think you usually have a decent approach.

                                  The other comparison I would make is a multiplexer: is this similar to CDM or TDM (code- or time-division multiplexing)? Another comparison is stereo, where you get L+R and a 2L signal (I forget the actual details), but taking (L+R) - 2L = R - L, and then (R - L) + (L+R) = 2R. They did it that way to take advantage of simpler hardware. I remember learning from that example that when coding stuff versus building components, you have to think differently about what is easy/hard.

                                  Finally, your problem reminds me of a computer engineering class, where we built circuits that were run through a simulator. The simulator used a queue design, where "events" would trigger through the queue, and the simulator was able to be fast because it ignored the timing signals, allowing it to "not wait" any time before processing an item. (I got in trouble in the class, because I wrote obscenely inefficient but SIMPLE code, reducing the homework to a TRIVIAL problem, avoiding the timing issues others were busy coding around.) Anyway, I could envision a queue that is managing N queues of inputs, and only when you take an item off do you go to the stream to pull in another item.
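
That closing idea sketches out naturally with a min-heap (illustrative code with invented names, not anyone's actual implementation): hold exactly one pending entry per track, and let popping an entry be the trigger for pulling that track's next event from its stream.

    #include <cstdint>
    #include <cstdio>
    #include <queue>
    #include <vector>

    struct Pending {
        uint64_t tick;    // absolute time of this event
        size_t   track;   // which source track it came from
        size_t   index;   // position within that track
    };
    struct Later {        // larger tick = lower priority, so the heap is a min-heap
        bool operator()(const Pending &a, const Pending &b) const { return a.tick > b.tick; }
    };

    int main() {
        // Toy per-track delta streams standing in for data read on demand from disk.
        std::vector<std::vector<uint32_t>> tracks = {{0, 480, 480}, {240, 480}};
        std::vector<uint64_t> trackTick(tracks.size(), 0);  // running absolute tick per track

        std::priority_queue<Pending, std::vector<Pending>, Later> heap;
        for (size_t t = 0; t < tracks.size(); ++t) {        // seed: one entry per track
            trackTick[t] = tracks[t][0];
            heap.push({trackTick[t], t, 0});
        }

        while (!heap.empty()) {
            Pending p = heap.top();
            heap.pop();
            std::printf("tick %llu from track %zu\n",
                        static_cast<unsigned long long>(p.tick), p.track);

            // Refill from the same track only, and only because we just popped it.
            size_t next = p.index + 1;
            if (next < tracks[p.track].size()) {
                trackTick[p.track] += tracks[p.track][next];
                heap.push({trackTick[p.track], p.track, next});
            }
        }
        return 0;
    }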

                                  • K Kirk 10389821

                                    Isn't this effectively a merge sort from N sources? Each source is already sorted. […]

                                    honey the codewitch wrote (#27):

                                    Kirk 10389821 wrote:

                                    Finally, your problem reminds me of a Computer Engineering Class, where we built circuits that were run through a simulator. The simulator used a queue design, where "events" would trigger through the queue, and the simulator was able to be fast, because it ignored the timing signals, allowing it to "not wait" any time before processing an item. (I got in trouble in the class, because I wrote obscenely inefficient but SIMPLE code, reducing the homework to a TRIVIAL problem, avoiding the timing issues others were busy coding around). anyways, I could envision a queue that is managing N queues of inputs, and only when you take off an item, do you go to the stream to pull in another item.

                                    You're describing the problem pretty well, which I actually solved last night. I won't paste the implementation here, but here it is in use with a queue q. Forgive the grotty code; it's just test stuff I've been banging on.

                                    #ifndef ARDUINO
                                    #include
                                    #include
                                    #include
                                    #include
                                    #include
                                    #include
                                    using namespace sfx;

                                    void dump_midi(stream* stm, const midi_file& file) {
                                        printf("Type: %d\nTimebase: %d\n", (int)file.type, (int)file.timebase);
                                        printf("Tracks: %d\n", (int)file.tracks_size);
                                        for(int i = 0; i < (int)file.tracks_size; ++i) {
                                            printf("\tOffset: %d, Size: %d, Preview: ", (int)file.tracks[i].offset, (int)file.tracks[i].size);
                                            stm->seek(file.tracks[i].offset);
                                            uint8_t buf[16];
                                            size_t tsz = file.tracks[i].size;
                                            size_t sz = stm->read(buf, tsz < 16 ? tsz : 16);
                                            for(int j = 0; j
