Novel approach improves automatic software repair by generating test cases
-
TechXplore[^]:
IMDEA Software researchers Facundo Molina, Juan Manuel Copia and Alessandra Gorla present FIXCHECK, a novel approach to improve patch fix analysis that combines static analysis, randomized testing and large language models.
Why just get the AI to write the code when you can get it to write the tests to prove it works?
What could possibly go right?
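The article only describes FIXCHECK at a high level, but the core idea — generating tests that try to falsify a proposed patch — can be illustrated with a toy randomized differential check. Everything below is invented for illustration (the `buggy_abs`/`patched_abs` functions, the oracle, the harness); it is a minimal sketch of randomized patch testing in general, not FIXCHECK's actual pipeline, which also draws on static analysis and LLMs:

```python
import random

# Hypothetical buggy function and its proposed patch (illustrative only).
def buggy_abs(x):
    # Bug: forgets to negate, so negative inputs pass through unchanged.
    return x if x > 0 else x

def patched_abs(x):
    return x if x >= 0 else -x

def generate_failing_tests(candidate, oracle, trials=1000, seed=0):
    """Randomized search for inputs on which `candidate` violates `oracle`.

    A real tool would pick inputs more cleverly (static analysis, an LLM
    prompted with the patch); here we just sample small integers.
    """
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        x = rng.randint(-100, 100)
        if not oracle(x, candidate(x)):
            failures.append(x)
    return failures

# Oracle: the result must be non-negative and preserve magnitude.
oracle = lambda x, y: y >= 0 and abs(x) == y

print(generate_failing_tests(buggy_abs, oracle))    # negative inputs caught
print(generate_failing_tests(patched_abs, oracle))  # [] — patch survives
```

Each input in the first list is a ready-made regression test case exposing the bug; an empty list for the patched version is (weak, sampled) evidence the fix holds.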
-
Heh. Haven't read it just yet. I'm aware of some folks who would definitely not see it this way, but I think auto-generation of tests is a pretty good idea. A big reason (maybe ironically?) is that tests are the one bit we really can't get wrong: a broken test feels worse than almost any other kind of mess-up, and it's code we don't even strictly have to have. What if, when I clicked Post Message, I could hold down a magic key combo that branched the repo with a new commit containing tests for all the logic that click caused to run?