CODE PROJECT For Those Who Code
Crouching Tiger, Hidden Complexity

The Lounge
Tags: algorithms, css, hardware, iot, performance
9 Posts, 3 Posters
#1 honey the codewitch

Something I run into a lot with IoT is that things which seem simple become complicated quickly because they have to run in a constrained environment. Take running a JPEG slideshow off an SD card on a 320kB system with a 320x240 display. Wire up the SD and an ST7789 or ILI9341 and Bob's your uncle, one might think. Except for one small wrinkle: a camera produces images in the megapixel range, while that display is only 0.0768 megapixels (320 x 240 = 76,800 pixels) if my math is right. That means you have to do some sort of resizing on most images.

It's all well and good to do bicubic or bilinear resampling of an image if you can access the entire uncompressed image data in a frame buffer, but again: a 320kB system. Good luck. Images must be progressively loaded and then blitted more or less directly to the display as they load, since the display hardware keeps its own 320x240 display memory on the chip. How that happens depends on the underlying image format. For BMP files, progressive loading means going bottom to top, scanline by scanline. For JPEGs it means getting 8x8 squares of the image at a time, left to right, top to bottom.

The issue is that when you're resampling, you need to run the computations over *overlapping* regions of the image to get a proper result, meaning you can't just resize those 8x8 chunks to, say, 6x6 and call it good. That would create artifacts every 6 pixels where the output wasn't blended properly at the edges with the neighboring chunk. So suddenly you need something like a 12x12 intermediary buffer so you can progressively resample, which makes a simple algorithm suddenly messy. And this is just an example. All this for a slideshow.
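To make the overlap problem concrete, here is a minimal sketch of bilinear downsampling for a single output row, assuming 8-bit grayscale and two resident source rows. The function name and layout are illustrative, not from any particular decoder library:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Produce one output row of dstW pixels from two adjacent source rows of
// srcW pixels each. fy in [0,1] blends vertically between row0 and row1.
std::vector<uint8_t> bilinearRow(const uint8_t* row0, const uint8_t* row1,
                                 int srcW, int dstW, float fy) {
    std::vector<uint8_t> out(dstW);
    for (int x = 0; x < dstW; ++x) {
        float sx = (x + 0.5f) * srcW / dstW - 0.5f; // map to source coords
        int x0 = (int)std::floor(sx);
        int x1 = x0 + 1;
        float fx = sx - (float)x0;
        if (x0 < 0) { x0 = 0; fx = 0.0f; }
        if (x1 >= srcW) x1 = srcW - 1;
        // x0 and x1 can straddle an 8x8 block boundary -- which is exactly
        // why the blocks can't be resized independently without seams.
        float top = row0[x0] * (1.0f - fx) + row0[x1] * fx;
        float bot = row1[x0] * (1.0f - fx) + row1[x1] * fx;
        out[x] = (uint8_t)(top * (1.0f - fy) + bot * fy + 0.5f);
    }
    return out;
}
```

The hard part the post describes is keeping the right pair of source rows resident while the decoder only hands you 8-row bands.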

    Real programmers use butterflies

#2 Super Lloyd

I see... Nothing that would stop a code witch extraordinaire though, is there?! :-D

A new .NET Serializer | All in one Menu-Ribbon Bar | Taking over the world since 1371!

#3 honey the codewitch

        *cracks knuckles* *grabs wand* Nothing an eye of newt won't solve.

#4 megaadam

The post title deserves an upvote all by itself. And a fun problem too. Here's a thought: if, and only if, you have fast random-access reads to your SD, you could shift your 6x6 convolution window one pixel horizontally at a time, reading a new vertical column of 6 input pixels for each output pixel. That would mean that for the next output row you'd read 5/6ths of the same pixels again; each pixel would be read 6 times. But TBH I would first try simply discarding pixels and subjectively examining those results. With such a low-res display I would assume the display has a lower dynamic range as well, and that would help mask the artifacts from an unmathematical downsampling. Or a hybrid: reading 2x2, and discarding. :cool:
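A hedged sketch of the two fallbacks mentioned here: plain pixel discarding is nearest-neighbor decimation, and one plausible reading of the 2x2 hybrid is a 2x2 box average before discarding. Grayscale, illustrative names:

```cpp
#include <cstdint>
#include <vector>

// Nearest-neighbor decimation: keep one source pixel per output pixel.
// No overlap, no intermediate buffers beyond the current source row.
std::vector<uint8_t> decimateRow(const uint8_t* src, int srcW, int dstW) {
    std::vector<uint8_t> out(dstW);
    for (int x = 0; x < dstW; ++x)
        out[x] = src[x * srcW / dstW];  // pick one pixel, drop the rest
    return out;
}

// 2x2 box average: one output sample per 2x2 source block, which softens
// the aliasing that pure decimation produces.
uint8_t box2x2(const uint8_t* row0, const uint8_t* row1, int x) {
    return (uint8_t)((row0[2 * x] + row0[2 * x + 1] +
                      row1[2 * x] + row1[2 * x + 1] + 2) / 4);
}
```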

          "If we don't change direction, we'll end up where we're going"

#5 honey the codewitch

I can't shift one pixel horizontally at a time, because JPEGs are compressed in 8x8 chunks, left to right, top to bottom. Now, I could resize those individually, but like I said, I'd get artifacts; I need to overlap. That's easy enough to do horizontally, but vertically is a problem, because I'd need to store 2 x image_width x 2 bytes worth of pixels to do bicubic sampling vertically. That's a huge problem RAM-wise, and it complicates the algorithm significantly. My other option, and I'm not 100% sure about this, is to do two passes over the image and do the "in-betweens" vertically on the second pass. I'm not even sure that will work, as so far I only have a vague sketch of the concept in my head, but if it's possible it will probably be the route I go.
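For scale, that vertical buffer can be sketched as a small ring of RGB565 scanlines; 2 x image_width x 2 bytes is 16 kB for a 4000-pixel-wide photo, which is why it stings on a 320 kB part. The struct and its names are illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Ring of the most recent source scanlines, so vertical resampling can
// reach back across the 8-row JPEG band boundary.
struct ScanlineRing {
    int width, rows, next = 0;       // next = index of the next scanline
    std::vector<uint16_t> buf;       // RGB565 pixels, rows * width of them
    ScanlineRing(int w, int r) : width(w), rows(r), buf((size_t)w * r) {}
    uint16_t* slotForNextRow() {     // storage for the incoming scanline
        uint16_t* p = buf.data() + (size_t)(next % rows) * width;
        ++next;
        return p;
    }
    // Scanline y is still resident iff it is among the last `rows` pushed.
    bool resident(int y) const { return y >= next - rows && y < next; }
    size_t bytes() const { return buf.size() * sizeof(uint16_t); }
};
```

Bilinear gets by with rows = 2; a full 4-tap bicubic kernel would want rows = 4, doubling the cost again.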

#6 megaadam

If you can read those 8x8 chunks with random access, would you really need to keep a whole row of chunks in memory? If I've "seen" your challenge correctly, you would only need to keep a two-by-two of those 8x8 chunks in memory at any time, with the downside that each chunk (except edge chunks) would have to be read, I think, 2x8 times. And of course I have no idea how expensive those reads are...
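Assuming chunks really could be fetched on demand (the big "if" here), the 2x2 working set is tiny: four 8x8 blocks, 256 bytes at 8 bits per sample. A sketch of the addressing, with illustrative names; the fetch itself is omitted because it is the hard part:

```cpp
#include <cstdint>

// A 2x2 window of decoded 8x8 chunks covering a 16x16 pixel footprint.
struct ChunkWindow {
    uint8_t blocks[2][2][8][8] = {}; // [blockY][blockX][row][col]
    int originX = 0, originY = 0;    // image coords of the window corner
    // Look up a pixel by absolute image coordinates; the caller must
    // guarantee (x, y) lies inside the current 16x16 footprint.
    uint8_t at(int x, int y) const {
        int lx = x - originX, ly = y - originY;
        return blocks[ly / 8][lx / 8][ly % 8][lx % 8];
    }
};
```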

#7 honey the codewitch

They're compressed; there is no random-access possibility in JPEGs, unless I'm mistaken. I'll add that I don't really care about load times when it comes to resizing.

#8 megaadam

What if you create an index of all the chunks in an initial indexing pass? chunk[x, y] => addr
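There is one standard hook for this kind of index: baseline JPEGs may contain restart markers (when a DRI segment is present), and those are the only points in the entropy-coded data where a Huffman decoder can resume cold, since they reset the bit alignment and DC predictors. A sketch of such an indexing pass, under the assumption that the markers exist; many camera JPEGs won't have them:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Scan entropy-coded JPEG data for restart markers (0xFF 0xD0..0xD7) and
// record the byte offsets where decoding can resume. Granularity is one
// restart interval (a fixed number of MCUs), not one chunk.
std::vector<size_t> indexRestartMarkers(const uint8_t* scan, size_t len) {
    std::vector<size_t> offsets;
    for (size_t i = 0; i + 1 < len; ++i) {
        if (scan[i] == 0xFF && scan[i + 1] >= 0xD0 && scan[i + 1] <= 0xD7)
            offsets.push_back(i + 2);   // decoding resumes after the marker
    }
    return offsets;
}
```

Stuffed 0xFF 0x00 bytes are naturally skipped, since 0x00 falls outside the RSTn range. Without restart markers, the index idea degenerates into decompressing up to the requested point.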

#9 honey the codewitch

There's an idea. The only thing I'm not sure about is the Huffman table. If it builds its compression state progressively, I won't be able to seek exactly; I'll have to decompress up to the requested point. I have a feeling I may need to do that anyway. One thing I was thinking of doing is opening the file twice and scanning through the two streams in tandem, with one ahead of the other by one row of 8x8s.
