Provide metrics about the execution of unit tests, such as the number of errors/failures, the success rate, etc.
There is a lot of variety when it comes to running automated tests for Python. That variety is caused by differences in source layout, environment setup, the test framework used, etc.
Long story short: we are outside of the Java/Maven universe; we cannot make strong assumptions about project layout etc. and thus cannot provide a fully automated solution in the plugin. Instead, we leave the aspect that varies – calling the tests and capturing the output – on the project's side, where it belongs. The plugin is only responsible for parsing the report and feeding the data into Sonar. We chose the JUnitReport XML format because of its popularity.
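For illustration, a minimal report in this format might look as follows (all names are hypothetical; pytest, for instance, can write such a file via its --junitxml option). The counters on the 'testsuite' tag – tests, errors, failures – provide exactly the numbers needed for the metrics above:

    <testsuite name="tests.test_accounts" tests="3" errors="0" failures="1" time="0.042">
      <testcase classname="tests.test_accounts.AccountTest" name="test_deposit" time="0.010"/>
      <testcase classname="tests.test_accounts.AccountTest" name="test_withdraw" time="0.012"/>
      <testcase classname="tests.test_accounts.AccountTest" name="test_overdraft" time="0.020">
        <failure message="assert balance &gt;= 0">traceback omitted</failure>
      </testcase>
    </testsuite>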
The format has some drawbacks though, most notably:
1. it assumes that each test function (= method) belongs to a class (it originates from a "J"-tool, after all)
2. it doesn't explicitly provide the path to the source file (that hurts in every environment where the class name cannot be reliably mapped to a source file)
Despite these drawbacks, it should be a goal not to move away from this (quasi-)standard format. The plan for dealing with the second drawback is:
a) use the knowledge of the lexer/parser to map the class names to the corresponding source files. That should work in most cases (see the sketch after this list).
b) as a fallback, allow injecting the path to the source file via an OPTIONAL 'source' attribute on the 'testcase' tag.
c) if the source file cannot be found either way, just create a 'virtual' one with the content "sources cannot be found" or similar.
d) to be precise about our format expectations, include a reference to a grammar in the docs which can be used to verify the validity of a report.
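The following Python sketch illustrates the lookup order of a) to c). It is not the plugin implementation, and all names in it (resolve_source, parse_report, the classname_to_file mapping) are hypothetical:

    import os
    import xml.etree.ElementTree as ET

    def resolve_source(testcase, classname_to_file, base_dir="."):
        """Resolve the source file for a <testcase>, following the fallback chain a)-c)."""
        # a) try the class-name -> file mapping gathered by the lexer/parser
        classname = testcase.get("classname", "")
        path = classname_to_file.get(classname)
        if path and os.path.isfile(os.path.join(base_dir, path)):
            return path, False
        # b) fall back to the OPTIONAL 'source' attribute, if the report carries one
        path = testcase.get("source")
        if path and os.path.isfile(os.path.join(base_dir, path)):
            return path, False
        # c) give up: the caller should create a 'virtual' source file for this name
        return classname or "unknown", True

    def parse_report(report_path, classname_to_file, base_dir="."):
        """Yield (classname, test name, source path, is_virtual) for each testcase."""
        root = ET.parse(report_path).getroot()
        # accept both a bare <testsuite> root and a <testsuites> wrapper
        suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
        for suite in suites:
            for case in suite.iter("testcase"):
                path, is_virtual = resolve_source(case, classname_to_file, base_dir)
                yield case.get("classname"), case.get("name"), path, is_virtual

In a report using fallback b), the attribute would simply appear on the tag, e.g. <testcase classname="tests.test_accounts.AccountTest" name="test_deposit" source="tests/test_accounts.py"/>.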