LALR vs LR parsing
-
Okay, so I know the table generation algorithms differ in that LALR merges entries where a canonical LR parser does not. My question is about the actual *use* of those entries - the final parse table. Is there a difference between how an LALR parser *parses* and how an LR parser *parses*? Specifically, can I use the same parser code regardless of where I got the parse table (whether from an LR algorithm or the LALR algorithm)?
Real programmers use butterflies
-
Shift/reduce parsing: the difference between LALR(1) and LR(1) is in how the table is built - LALR(1) computes its lookaheads on top of the LR(0) automaton, while canonical LR(1) keeps distinct states. Operationally they are the same: shift when the table says shift, reduce when it says reduce, and report an error when the received token is not in the set of expected tokens. So yes, the same driver code works with either table.
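To make the point concrete, here's a minimal sketch of that table-driven driver loop. The grammar, state numbers, and table layout below are hypothetical (a toy grammar `S -> S '+' n | n`), but the loop itself is table-agnostic: it never inspects how the ACTION/GOTO tables were generated, which is why a canonical LR(1) table and an LALR(1) table can feed the same code.

```python
# Toy grammar (hypothetical, for illustration):  S -> S '+' n  |  n
# Productions: index -> (lhs nonterminal, length of right-hand side)
PRODS = {1: ("S", 3), 2: ("S", 1)}

# ACTION[state][token] = ("shift", next_state) | ("reduce", prod) | ("accept",)
ACTION = {
    0: {"n": ("shift", 2)},
    1: {"+": ("shift", 3), "$": ("accept",)},
    2: {"+": ("reduce", 2), "$": ("reduce", 2)},
    3: {"n": ("shift", 4)},
    4: {"+": ("reduce", 1), "$": ("reduce", 1)},
}
# GOTO[state][nonterminal] = next_state (after a reduce)
GOTO = {0: {"S": 1}}

def parse(tokens):
    stack = [0]                      # stack of parser states
    toks = list(tokens) + ["$"]      # input plus end-of-input marker
    i = 0
    while True:
        act = ACTION[stack[-1]].get(toks[i])
        if act is None:              # token not in the expected set
            raise SyntaxError(f"unexpected token {toks[i]!r}")
        if act[0] == "shift":
            stack.append(act[1])
            i += 1
        elif act[0] == "reduce":
            lhs, n = PRODS[act[1]]
            del stack[len(stack) - n:]          # pop |rhs| states
            stack.append(GOTO[stack[-1]][lhs])  # goto on the lhs
        else:                        # accept
            return True
```

Swapping in tables produced by a different construction (canonical LR(1), LALR(1), SLR(1)) changes only the dictionaries, never the loop - which is the answer to the original question.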
-
It's based on it, but it still has partial lookahead, so I'd say it's somewhere between LR(0) and LR(1). Maybe that's arguing semantics, I don't know, but that's how I see it; in expressive power it shakes out that way anyway. And I describe it that way because it's referred to as LALR(1), not LALR(0). Same with SLR(1) - which I don't think anyone uses because there's little point, but academically, I mean.
Real programmers use butterflies
-
Anyway, thanks for your answer. I figured it out eventually the day I asked it, but I didn't think to close the question (and I don't know if that's expected).
Real programmers use butterflies