Fix bug in sparse MDP reachability reward interval iteration

Due to a typo, the upper iteration uses `updateValueFromBelow` instead of `updateValueFromAbove`
to update the value vector. When the flag for monotonicity enforcement from above is active (default),
all values are thus forced to be above the upper bound, preventing convergence.
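
For reference, here is a minimal sketch of what such monotonicity-enforcing updates amount to (the names and signatures are illustrative only, not PRISM's actual IterationHelper code): the upper iteration may only move values downwards and the lower iteration upwards, otherwise the two bound sequences cannot meet.

#include <algorithm>

// Illustrative sketch, not PRISM's actual helper code:
// "from above" keeps the upper-bound iterate non-increasing,
// "from below" keeps the lower-bound iterate non-decreasing.
inline void updateValueFromAbove(double &dst, double oldVal, double newVal)
{
    dst = std::min(oldVal, newVal);
}

inline void updateValueFromBelow(double &dst, double oldVal, double newVal)
{
    dst = std::max(oldVal, newVal);
}

Calling updateValueFromBelow in the upper iteration therefore clamps each value to the maximum of the old and new value, so the upper bounds can never decrease towards the lower bounds.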

Example:
prism functionality/verify/mdps/reach/mdp_simple.nm functionality/verify/mdps/reach/mdp_simple.nm.props -sparse -intervaliter

+ test case
master · Joachim Klein · 8 years ago · commit 7fc6878dc6
Changed files:
  1. prism-tests/functionality/verify/mdps/reach/mdp_simple.nm.props.args (5 changed lines)
  2. prism/src/sparse/PS_NondetReachRewardInterval.cc (2 changed lines)

prism-tests/functionality/verify/mdps/reach/mdp_simple.nm.props.args (5 changed lines)

@@ -5,3 +5,8 @@
 -ex -gs
 -ex -politer
 -ex -modpoliter
+-s -ii
+-m -ii
+-h -ii
+-ex -valiter -ii
+-ex -gs -ii

prism/src/sparse/PS_NondetReachRewardInterval.cc (2 changed lines)

@@ -395,7 +395,7 @@ jint flags
 }
 // set vector element
 // (if there were no choices from this state, reward is zero/infinity)
-helper.updateValueFromBelow(soln_above2[i], soln_above[i], (h1 > l1) ? d1 : inf_vec[i] > 0 ? HUGE_VAL : 0);
+helper.updateValueFromAbove(soln_above2[i], soln_above[i], (h1 > l1) ? d1 : inf_vec[i] > 0 ? HUGE_VAL : 0);
 }
 if (iterationExport) {
