Shot in the dark for any Photoshop users out there
-
I'm not sure if this will bear any fruit, but I'm asking on the off chance someone might know what I'm going on about below. I understand alpha-blending (pixels with opacity); what I don't understand is compositing modes. They are something like these, but in my codebase they are not called this: Blend modes - Wikipedia[^]
static const composition_function_t composition_solid_table[] = {
    composition_solid_source,
    composition_solid_source_over,
    composition_solid_destination_in,
    composition_solid_destination_out,
};

static const composition_function_t composition_table[] = {
    composition_source,
    composition_source_over,
    composition_destination_in,
    composition_destination_out,
};

These are the blend/compositing modes as they appear in my codebase. The implementation of those functions is here, in case anyone's interested: plutovg/source/plutovg-blend.c at main · sammycage/plutovg · GitHub[^] I need to adapt them, so I need to understand them. I've tried following the code, but obviously I'm not doing it right, because when I try to adapt it to do reads/writes to RGB565 bitmaps some of my colors come back wrong. For the most part I am using "composition_solid_source" and "composition_solid_source_over", and the "source over" part is what confuses me. I don't know what it means. Would any graphics aficionados/nadas be able to explain this to me in high-level terms?
Check out my IoT graphics library here: https://honeythecodewitch.com/gfx And my IoT UI/User Experience library here: https://honeythecodewitch.com/uix
In Photoshop, you can superpose different layers and decide, for each one, how it will "blend" with the layers below it. The simplest way to blend is to simply stack the layer contents: you see what is on the top layer first, then whatever parts of the layer below it aren't covered, and so on. Like a pile of cards that aren't aligned, you see the top card and bits of the cards below it when you look from above. You can then apply a blending formula to each layer, and it can be different for each one. For the pile of cards, you could say you only want to merge the red colors into the final rendering, or only the pixels above a certain saturation or below a certain brightness, ignoring all the others, or you could multiply the pixel values together. If you have a pile of blue and red cards and only merge red, you'll end up with one big chunk of merged red cards surrounded by bits of untouched blue cards below (assuming the top card is red). As you pointed out, each layer also has an opacity value (alpha-blending), which is independent of the blending mode. It's quite easy to grasp if you can get your hands on a copy of Photoshop: just create a bunch of layers with different geometric shapes on them and play with the predefined blending/composition mode for each layer.
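To tie that back to the "source over" question: with premultiplied colors (each channel already scaled by its own alpha), the Porter-Duff "source over" operator is simply result = src + dst * (1 - src_alpha), applied to every channel, alpha included. A minimal per-channel sketch (my own illustration, not code taken from plutovg):

#include <stdint.h>

// Porter-Duff "source over" for one premultiplied 8-bit channel.
// result = src + dst * (1 - src_alpha), with alpha expressed in 0..255.
static inline uint8_t src_over_channel(uint8_t src, uint8_t dst, uint8_t src_alpha)
{
    return (uint8_t)(src + (dst * (255 - src_alpha) + 127) / 255);
}

That is essentially what the color + BYTE_MUL(dest[i], ialpha) pattern in plutovg-blend.c computes, four packed channels at a time, with ialpha = 255 - alpha(color).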
-
I think I grok Photoshop quite well, and I think the Wiki does a fine job of explaining the plethora of blend modes. All PS blend modes produce one resulting RGBA layer from two RGBA layers in. It seems to me, after a (possibly too) quick glance, that the functions you point to take one RGBA in. I assume you know this: the multiplication of R * R is not a direct integer MUL. It is a "normalised" MUL, i.e.
0x80 * 0x80 is not 0x4000
it must be seen as 0.5 * 0.5 == 0.25
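In other words, the byte values stand for fractions of 255. A minimal sketch of one common way to do that multiply exactly in integer math (my own example, not any particular library's code):

#include <stdint.h>

// Multiply two bytes as if they were the fractions a/255 and b/255,
// returning the result scaled back to 0..255 with correct rounding.
static inline uint8_t norm_mul(uint8_t a, uint8_t b)
{
    uint32_t t = (uint32_t)a * b + 0x80;      // product plus rounding bias
    return (uint8_t)(((t >> 8) + t) >> 8);    // fast, exact divide by 255
}

// norm_mul(0x80, 0x80) == 0x40   (roughly 0.5 * 0.5 == 0.25)
// norm_mul(0xFF, 0xFF) == 0xFF   (1.0 * 1.0 == 1.0)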
"If we don't change direction, we'll end up where we're going"
-
Sorry if this sounds dumb, but how can 0x80 become 0.5?
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S Thompson - RIP
0x80 is 128, which is half of 256, the range of a byte. I have a function called BYTE_MUL() in my code that multiplies a value by a second value taken as a fraction of 255, in which case 0x80 is about half.
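For reference, the packed four-channel version of the same idea (the classic trick used by Qt-style rasterizers; treat this as a sketch rather than the exact BYTE_MUL in my code) scales all four 8-bit channels of a 32-bit pixel by a/255 at once, two 16-bit lanes at a time:

#include <stdint.h>

// Scale every 8-bit channel of a packed 32-bit pixel by a/255.
// All four lanes are treated identically, so the multiply itself works the
// same whether the packing is ARGB or RGBA; only alpha extraction and the
// framebuffer conversion helpers care about channel order.
static inline uint32_t byte_mul(uint32_t x, uint32_t a)
{
    uint32_t t = (x & 0x00FF00FFu) * a;                                // lanes 0 and 2
    t = ((t + ((t >> 8) & 0x00FF00FFu) + 0x00800080u) >> 8) & 0x00FF00FFu;
    uint32_t u = ((x >> 8) & 0x00FF00FFu) * a;                         // lanes 1 and 3
    u = (u + ((u >> 8) & 0x00FF00FFu) + 0x00800080u) & 0xFF00FF00u;
    return u | t;
}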
Check out my IoT graphics library here: https://honeythecodewitch.com/gfx And my IoT UI/User Experience library here: https://honeythecodewitch.com/uix
-
In photoshop, you can superpose different layers, and decide for each how it will "blend" with the layers below. The simplest blend way is to simply superpose the layer content, so you see at first what is on the layer on top, then what is on the layer below it but for what is on the first layer, and so on. Like a pile of cards that are not aligned, you'll see the top card and bits of the cards below it if you look from above. And then you can apply a formula for each layer blending, that can be different for each. For the pile of cards, you can say that you want to merge only the red colors in the final rendering, or the pixels that are above a certain saturation value, or below a certain brightness, and ignore all other, or multiply the pixel values. If you have a pile of blue and red cards, and only merge red, you'll end up with a solid big chunk of merge red cards surrounded by bits of untouched blue cards below (assuming the top card is red). As you pointed out, each of the layers have an opacity value (alpha-blending) which is independent from the blending mode. It is quite easy to grasp if you can get in touch with a photoshop version : simply create a bunch of layers with different geometrical shapes on them, and play with the predefined blending/compistion mode for each layer.
In essence: this function works, but it uses callbacks, and does the alpha blending against the destination pixel fetched by the read callback:

static void composition_solid_source_over(const composition_params& params)
{
    uint32_t color = params.color;
    // scale the solid source color by the constant alpha if it isn't fully opaque
    if(params.const_alpha != 255) color = BYTE_MUL(params.color, params.const_alpha);
    uint32_t ialpha = 255 - plutovg_alpha(color);
    for(int i = 0; i < params.length; i++) {
        uint32_t col;
        if(params.canvas->read_callback != nullptr) {
            ::gfx::vector_pixel c;
            params.canvas->read_callback(::gfx::point16(params.x + i, params.y), &c, params.canvas->callback_state);
            // source over: result = src + dst * (1 - src_alpha)
            col = color + BYTE_MUL(c.native_value, ialpha);
        } else {
            col = color;
        }
        if(params.canvas->write_callback != nullptr) {
            params.canvas->write_callback(::gfx::rect16(params.x + i, params.y, params.x + i, params.y), ::gfx::vector_pixel(col, true), params.canvas->callback_state);
        }
    }
}

This one works too, even though it could be more efficient if I updated BYTE_MUL() to handle RGBA instead of ARGB:
static void direct_rgba32_composition_solid_source_over(const composition_params& params)
{
    uint32_t* dest = (uint32_t*)(((uint8_t*)params.direct_rgba32) + (params.y * params.stride)) + params.x;
    uint32_t color = params.color;
    if(params.const_alpha != 255) color = BYTE_MUL(params.color, params.const_alpha);
    uint32_t ialpha = 255 - plutovg_alpha(color);
    for(int i = 0; i < params.length; i++) {
        dest[i] = vector_to_rgba32(color + BYTE_MUL(rgba32_to_vector(dest[i]), ialpha));
    }
}

This does not:
static void direct_rgb16_composition_solid_source_over(const composition_params& params)
{
    uint16_t* dest = (uint16_t*)(((uint8_t*)params.direct_rgb16) + (params.y * params.stride)) + params.x;
    uint32_t color = params.color;
    if(params.const_alpha != 255) color = BYTE_MUL(params.color, params.const_alpha);
    uint32_t ialpha = 255 - plutovg_alpha(color);
    for(int i = 0; i < params.length; i++) {
        uint32_t read_col = rgb16_to_vector(dest[i]);
        uint32_t col = color + BYTE_MUL(read_col, ialpha);
        dest[i] = vector_to_rgb16(col);
    }
}
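Roughly, rgb16_to_vector/vector_to_rgb16 need to do something along these lines (this is just a sketch of the round trip, not my actual helpers, and it assumes the same packed 32-bit layout as the RGBA32 path). A correct round trip only loses the low 3/2/3 bits; a channel-order mismatch anywhere in the chain, on the other hand, swaps whole channels and produces entirely wrong colors rather than mere banding.

#include <stdint.h>

// Illustrative helpers, not the library's own code; they assume a 0xAARRGGBB layout.
// Expand RGB565 to a packed 32-bit pixel with bit replication, forcing alpha opaque.
static inline uint32_t rgb565_to_argb32(uint16_t p)
{
    uint32_t r5 = (p >> 11) & 0x1F;
    uint32_t g6 = (p >> 5)  & 0x3F;
    uint32_t b5 =  p        & 0x1F;
    uint32_t r = (r5 << 3) | (r5 >> 2);   // replicate the high bits: 5 -> 8
    uint32_t g = (g6 << 2) | (g6 >> 4);   // 6 -> 8
    uint32_t b = (b5 << 3) | (b5 >> 2);   // 5 -> 8
    return 0xFF000000u | (r << 16) | (g << 8) | b;
}

// Truncate a packed 0xAARRGGBB pixel back down to RGB565, dropping alpha.
static inline uint16_t argb32_to_rgb565(uint32_t c)
{
    uint32_t r = (c >> 16) & 0xFF;
    uint32_t g = (c >> 8)  & 0xFF;
    uint32_t b =  c        & 0xFF;
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}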
Check out my IoT graphics library here: https://honeythecodewitch.com/gfx And my IoT UI/User Experience library here: https://honeythecodewitch.com/uix
-
honey the codewitch wrote:
This does not:
Yes, because
uint32_t read_col=rgb16_to_vector(dest[i]);
probably causes information loss.
-
The important bit was the first part, "two input layers for Photoshop blending". I saw a function with only one input layer. The 0x80 was just a safety check. I assumed you knew, and you did, so yes: 0xff == 255/255, 0x80 == 128/255 ≈ 1/2, 0x01 == 1/255.
"If we don't change direction, we'll end up where we're going"
-
Do not bother with Adobe's monster portals. Get GIMP: it's free, and it has the same-ish blend modes, even if the naming might vary.
"If we don't change direction, we'll end up where we're going"
Nah, it is like all these MS Office clones: good, but it does not top the real thing. Actually, I am still using Photoshop 7: no online licensing required, and just enough features for me to be able to know most of them.
-
honey the codewitch wrote:
This does not:
Yes, because
uint32_t read_col=rgb16_to_vector(dest[i]);
probably causes information loss.
Nah. It does cause information loss, but that's not what I'm experiencing. What I'm experiencing is outright corruption: Corrupt image[^] If it were just losing fidelity, I would see it at the edges of the shading in gradients and such, but whole areas wouldn't come out as entirely the wrong color.
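To put illustrative numbers on the distinction (using the bit-replicating round trip sketched earlier, not actual output from my code):

// A plain RGB565 round trip loses only the low bits, which shows up as mild banding:
//   0x00FF3366 -> 0xF98C (RGB565) -> 0x00FF3063 on the way back.
// A red/blue channel-order mix-up somewhere in the conversion, by contrast,
// changes the hue of every pixel outright:
//   0x00FF3366 would come back as roughly 0x006330FF.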
Check out my IoT graphics library here: https://honeythecodewitch.com/gfx And my IoT UI/User Experience library here: https://honeythecodewitch.com/uix
-
We are not at all trying to "top" PS though. We are trying to avoid the Adobe portal. I was unaware that PS7 is easy to get.
"If we don't change direction, we'll end up where we're going"
megaadam wrote:
I was unaware that PS7 is easy to get.
Want a copy? :-D I don't know where/how it might be available online these days. I bought my copy in 2003 -- from an online "OEM" seller.