Like --args <file>, this allows a list of lists of arguments to be provided,
with each element of the outer list representing a separate run of PRISM
to be performed on all benchmarks. Here, these can be provided directly
on the command-line, as a comma-separated list, rather than creating a file.
For example:
prism-auto ... --args-list '-gs,-jac,-jor -omega 0.9'
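A value like the one above could be split into per-run argument lists roughly as follows (a minimal sketch; the function name is hypothetical and this is not prism-auto's actual code):

```python
def parse_args_list(args_list):
    """Split a comma-separated --args-list value into one argument
    list per PRISM run (hypothetical helper, for illustration only)."""
    return [entry.strip().split() for entry in args_list.split(",")]

# '-gs,-jac,-jor -omega 0.9' yields three runs:
# ['-gs'], ['-jac'] and ['-jor', '-omega', '0.9']
runs = parse_args_list("-gs,-jac,-jor -omega 0.9")
```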
The prism-log-extract script extracts and collates info from a
collection of PRISM log files.
The basic usage is "prism-log-extract <targets>" where <targets>
is one or more log files or directories containing log files.
The default behaviour is to extract all known fields from all logs
and then print the resulting table of values in CSV format.
Run "prism-log-extract --help" for details of further options.
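The kind of extraction described above can be sketched as follows; the field names and regular expressions here are illustrative assumptions, not the script's actual patterns:

```python
import csv
import re
import sys

# Illustrative field patterns (assumed, not prism-log-extract's real ones).
FIELDS = {
    "states": re.compile(r"States:\s+(\d+)"),
    "time": re.compile(r"Time for model checking:\s+([\d.]+)"),
}

def extract_fields(log_text):
    """Return a dict mapping each known field to its first match (or '')."""
    row = {}
    for name, pattern in FIELDS.items():
        m = pattern.search(log_text)
        row[name] = m.group(1) if m else ""
    return row

def write_csv(rows, out=sys.stdout):
    """Print the collated table of values in CSV format."""
    writer = csv.DictWriter(out, fieldnames=list(FIELDS))
    writer.writeheader()
    writer.writerows(rows)
```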
New switch --filter-models X, which restricts the list of models used from
a directory to those that match the filter X. Currently, this can refer to
the number of states and/or the model type. Examples:
* prism-auto . --filter-models "states>100 and states<10000"
* prism-auto . --filter-models "model_type=='DTMC'"
* prism-auto . --filter-models "'MC' in model_type"
The model metadata is by default read in from a models.csv file (as found,
for example, in the PRISM benchmark suite). The name of the file used can
be changed with --models-info FILE.
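A plausible sketch of how such filters could be evaluated against the per-model metadata (the column names and the evaluation-as-Python-expression approach are assumptions inferred from the example filters above):

```python
import csv

def load_models_info(path="models.csv"):
    """Read model metadata (e.g. from the PRISM benchmark suite's
    models.csv), keyed by model name. Column names are assumptions."""
    with open(path, newline="") as f:
        return {row["name"]: row for row in csv.DictReader(f)}

def matches_filter(filter_expr, metadata):
    """Evaluate a --filter-models expression such as
    "states>100 and states<10000" against one model's metadata.
    Sketch only: the expression is evaluated as Python, with the
    metadata fields as the only visible names."""
    env = {
        "states": int(metadata.get("states", 0)),
        "model_type": metadata.get("model_type", ""),
    }
    return bool(eval(filter_expr, {"__builtins__": {}}, env))
```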
When using the -l (--log) option and also the -a (--args) option,
subdirectories are created for each entry in the args file.
This makes it easier to process the log files afterwards.
Relatedly, if the directory specified with the -l switch, or the required
subdirectories, do not exist, they are now created automatically rather
than causing an error.
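In Python this directory-creation behaviour can be sketched as below (the function name is hypothetical):

```python
import os

def log_dir_for(log_root, args_entry_name):
    """Create (if needed) and return the per-args-entry log
    subdirectory; names here are illustrative, not prism-auto's."""
    path = os.path.join(log_root, args_entry_name)
    os.makedirs(path, exist_ok=True)  # no error if it already exists
    return path
```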
We now count non-convergence (i.e., the error message contains 'did not converge') as a sub-type of failure.
Additionally, count skipped export-runs and skipped duplicate runs as sub-types of skipped tests.
Test statistics are now also printed in the (not particularly useful) case where the timeout is set to 0: we test against None, rather than truthiness, to determine whether a timeout was set.
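The distinction being made is the usual None-vs-0 one, sketched here (function name hypothetical):

```python
def timeout_enabled(timeout):
    """A timeout of 0 is still a configured timeout; only None means
    "no timeout set". Testing truthiness (`if timeout:`) would
    wrongly treat 0 as "no timeout"."""
    return timeout is not None
```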
The occurrence of a line with 'Error:' does not necessarily imply a
failed test result, e.g., for test cases that test against the error
messages.
So we revert the previous change related to the printing of 'Error' lines
and tweak the handling in verbose-test mode some more.
In non-test mode, PRISM and prism-auto both write to stdout, without
prism-auto seeing/processing the output of PRISM. If the output of
prism-auto is piped to another program or to a file, the prism-auto
output is buffered. Then, the output by prism-auto (e.g., printing the
command lines) is not properly synchronized with the output of the
PRISM instances.
So, we flush stdout at appropriate locations.
Additionally, on timeout we prepend a '\n' to ensure that the timeout
message starts at a new line (in particular for the common case of a
timeout during explicit model building, where there is no newline from
PRISM until the model is fully built).
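Both fixes can be sketched as follows (function names and message wording are illustrative, not prism-auto's exact code):

```python
import sys

def announce(command_line):
    """Print the command line about to be run and flush immediately,
    so piped output stays interleaved correctly with PRISM's output."""
    print(command_line)
    sys.stdout.flush()

def report_timeout(seconds):
    """Prepend a newline so the timeout message starts on a fresh
    line even if PRISM was mid-line (e.g. during model building)."""
    sys.stdout.write("\nTimeout after %d seconds\n" % seconds)
    sys.stdout.flush()
```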
When using -x to add additional options, e.g., to force a specific engine,
runs of PRISM are often effectively duplicated, e.g., if there are .args files
that select multiple engines.
With --skip-duplicate-runs, prism-auto tries to clean up the argument list,
removing switches that have no effect, and to detect duplicate runs,
executing only the first of each set.
Option to allow skipping PRISM runs that do exports.
This is useful when overriding the engine, as the exports differ slightly
between engines and some export options are not supported.
In nailgun mode, if --ngprism is not specified, we derive the location of the ngprism
binary from the --prog setting: We simply replace the file part with ngprism, as usually
the ngprism binary is located in the same directory as the prism startup script.
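The path derivation described above amounts to the following (a minimal sketch):

```python
import os

def derive_ngprism(prog):
    """Replace the file part of the --prog path with 'ngprism',
    assuming the ngprism binary sits next to the prism startup script."""
    return os.path.join(os.path.dirname(prog), "ngprism")
```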
Sometimes, if the prism-auto script gets interrupted, an existing
nailgun server is not properly shut down and might break a subsequent
prism-auto run.
(1) use the dedicated printColoured method, which suppresses
colouring if isColourEnabled() is false (e.g., when piping to a file);
(2) use a dedicated colour (light red) for warnings, as before.
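A minimal sketch of this pattern in Python, assuming colour is enabled exactly when stdout is a terminal (the names and the isatty heuristic are assumptions, not the actual implementation):

```python
import sys

def is_colour_enabled():
    """Colours only when stdout is a terminal; suppressed when
    output is piped to a file or another program."""
    return sys.stdout.isatty()

def print_coloured(text, colour_code="1;31"):
    """Print with an ANSI colour (default light red, as used for
    warnings), or plain if colouring is disabled."""
    if is_colour_enabled():
        print("\033[%sm%s\033[0m" % (colour_code, text))
    else:
        print(text)
```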
git-svn-id: https://www.prismmodelchecker.org/svn/prism/prism/trunk@12003 bbc10eb1-c90d-0410-af57-cb519fbb1720
The previously used filecmp.cmp opens the files to be compared in 'rb'
mode, i.e., it will tell us that two files that differ only in the
line-ending encoding (CRLF vs LF) are not equivalent. However, we'd
like to get the export tests to succeed on Windows, regardless of the
line endings. So, we provide our own file comparison method that opens
the file in 'rU' mode (universal newline mode), which converts all the
newline encodings to '\n' transparently.
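Such a comparison can be sketched as below; note that in current Python the 'rU' mode string is gone, and universal newline mode is spelled `newline=None` on a text-mode `open` (the function name is hypothetical):

```python
def files_equivalent(path1, path2):
    """Compare two text files ignoring line-ending differences
    (CRLF vs LF). Text mode with newline=None (universal newlines,
    the modern spelling of the old 'rU' mode) converts all line
    endings to '\n' before comparing."""
    with open(path1, "r", newline=None) as f1, \
         open(path2, "r", newline=None) as f2:
        return f1.read() == f2.read()
```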
git-svn-id: https://www.prismmodelchecker.org/svn/prism/prism/trunk@11880 bbc10eb1-c90d-0410-af57-cb519fbb1720
Somehow, PRISM cannot open a NamedTemporaryFile created on Windows
(see issue prismmodelchecker/prism#11) when passed the filename via
the -mainlog parameter.
So, on Windows, we fall back on the old method of capturing stdout
directly via the Popen call. As this does not work with nailgun (the
C printfs go to the nailgun server stdout), we currently don't allow
nailgun use on Windows.
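The platform-dependent fallback could look roughly like this (a sketch under stated assumptions; the function name is hypothetical and the real prism-auto logic differs in detail):

```python
import platform
import subprocess

def run_prism(command, log_path):
    """On Windows, capture PRISM's stdout directly via Popen/call
    instead of passing a temp-file name through -mainlog."""
    if platform.system() == "Windows":
        with open(log_path, "w") as log:
            return subprocess.call(command, stdout=log,
                                   stderr=subprocess.STDOUT)
    # Elsewhere, the log file name is passed to PRISM via -mainlog.
    return subprocess.call(command + ["-mainlog", log_path])
```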
git-svn-id: https://www.prismmodelchecker.org/svn/prism/prism/trunk@11879 bbc10eb1-c90d-0410-af57-cb519fbb1720