Testing

So demos are nice, but we'd better be sure to test our updated logic. As we noted, our previous simulation lacked any asynchronous effects, so let's add those.

(defn check-invariants [session]
  (let [moves (map :?move (rules/query-partial session ::simple/move))
        move-requests (map :?request (rules/query-partial session ::simple/move-request))
        game-over (rules/query-one :?game-over session ::simple/game-over)
        winner (rules/query-one :?player session ::simple/winner)]
    (let [counts (into {} (map (juxt key (comp count val)) (group-by ::simple/player moves)))
          xs (or (:x counts) 0)
          os (or (:o counts) 0)]
      ; Make sure we don't have any extra moves. :x goes first so should be
      ; either one ahead or equal to :o.
      (when (or (< 1 (- xs os)) (> 0 (- xs os)))
        (throw (ex-info "Invariant violation: extra moves" {:counts counts}))))
    ; If all the squares are full, the game should be over.
    (when (= 9 (count moves))
      (when (not game-over)
        (throw (ex-info "Invariant violation: game should be over" {}))))
    ; Smart AI should never lose.
    (when smart-ai
      (when (= :o winner)
        (throw (ex-info "Invariant violation: smart AI should never lose" {}))))
    ; In teaching mode, the user should never lose.
    (when teaching-mode
      (when (= :x winner)
        (throw (ex-info "Invariant violation: human should never lose in teaching-mode" {}))))
    (let [mr-pos (set (map ::simple/position move-requests))
          m-pos (set (map ::simple/position moves))]
      ; Can't request a move for a square that's already been used.
      (when (not-empty (set/intersection mr-pos m-pos))
        (throw (ex-info "Invariant violation: moves and move requests should not have overlapping positions" {}))))))

(defn abuse-async [session-atom iterations delay-ms]
  (add-watch session-atom :check-invariants
             (fn [_ _ _ session] (check-invariants session)))
  (async/go
    (enable-console-print!)
    (loop [i 0]
      (if (< i iterations)
        (if (rules/query-one :?game-over @session-atom ::simple/game-over)
          ; If the game is over, just reset.
          (let [req (rules/query-one :?request @session-atom ::simple/reset-request)]
            (if req
              (do
                (common/respond-to req)
                (<! (async/timeout (* delay-ms (rand))))
                (recur (inc i)))
              (throw (ex-info "Should have reset request for game over" {}))))
          (let [reqs (rules/query-partial @session-atom ::simple/move-request :?player :o)]
            (if (not-empty reqs)
              (let [req (:?request (rand-nth reqs))]
                (common/respond-to req req)
                (<! (async/timeout (* delay-ms (rand))))
                (recur (inc i)))
              (if (> 0.01 (rand))
                (do
                  (async/go
                    (<! (async/timeout (* delay-ms (rand))))
                    (when-let [req (rules/query-one :?request @session-atom ::simple/reset-request)]
                      (common/respond-to req)))
                  (<! (async/timeout (* delay-ms (rand))))
                  (recur (inc i)))
                ; If there were no valid :o moves and we didn't reset, wait and recur.
                ; :x will get it together and move sooner or later.
                (do
                  (<! (async/timeout (* delay-ms (rand))))
                  (recur i))))))
        (remove-watch session-atom :check-invariants)))))

check-invariants introduces a couple of new invariants, basically ensuring that the computer never loses when using the smart AI and that the user never loses in teaching mode. The abuse-async function is a bit more involved than our previous abuse-simple. We've added a delay-ms parameter, the maximum delay between requests. Since the whole thing is async, the main processing loop is now embedded in a clojure.core.async/go block. The simulation logic is similar, but we now let the AI service handle :x moves, so the simulation only deals with :o moves.
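
Since abuse-async's last form is the go block, it returns that block's channel, which means it drops neatly into a cljs.test async test if you'd rather drive it from a test runner than the REPL. Here's a minimal sketch, assuming abuse-async and session-atom are in scope (e.g. referred from the simulation namespace); the test namespace name and the iteration/delay numbers are made up.

(ns example.simple-simulation-test
  (:require [cljs.test :refer-macros [deftest async]]
            [cljs.core.async :refer [<!]])
  (:require-macros [cljs.core.async.macros :refer [go]]))

(deftest abuse-async-test
  (async done
    (go
      ;; Park until the simulation loop finishes. Invariant violations still
      ;; surface as thrown exceptions in the console rather than as test
      ;; assertions, but the run itself is driven by the test runner.
      (<! (abuse-async session-atom 100 20))
      (done))))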

If we set ai-latency and delay-ms to zero, 10000 iterations take about 60 seconds. But that's not really the interesting case. To get a more realistic test, we want non-zero values for both ai-latency and delay-ms, chosen so that there is some chance of overlapping requests. We often run it as (abuse-async session-atom 10000 (* 0.5 ai-latency)). Of course, in this case the run time is dominated by the delays, so you'll want to fire it up and go get a cup of coffee.
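
For reference, the two configurations look like this (ai-latency is the same AI-effect latency setting discussed earlier, configured wherever the AI service is set up):

;; Fast, nearly-synchronous run (with ai-latency also configured to zero).
(abuse-async session-atom 10000 0)

;; More realistic run: delay-ms at half the AI latency, so simulation
;; responses and AI responses have a decent chance of overlapping.
(abuse-async session-atom 10000 (* 0.5 ai-latency))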

At this point, a question may have occurred to you: if we're interacting with the AI effect in our automated test code, and all effects are equal, then why can't we also hook up the simulation to the UI effect? Let's do it and see what happens.
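
The wiring is nothing special; something like the sketch below, where start-ui! is a hypothetical stand-in for however your app attaches the UI effect to the session atom. The name is made up, but the shape is the point: both effects just watch and respond to the same atom.

;; Drive the real UI effect and the simulation from the same session atom.
(defn run-visible-sim []
  (start-ui! session-atom)               ; real rendering against the real DOM
  (abuse-async session-atom 1000 500))   ; simulation answers the :o move requests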

For about the first half of the video, we're running this simulation with ai-latency and delay-ms both set to 500ms, so we can watch the UI change. The second half drops the latency to 20ms, just to show how much simulation ground we can cover when that's desirable and practical. It works precisely as expected. So we see how R³ also facilitates integration testing. In the current example, we're only testing invariants that result from the AI and "testing" the UI by eye (which isn't such a bad thing). We could actually write some UI tests as well, querying for elements and so on, similar to what you might do with something like Selenium.
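
A DOM-level check could slot straight into the same watch. Here's a rough sketch, assuming the view renders each placed mark with a "mark" CSS class; that class name is an assumption about the view code, not something the example actually defines.

(defn check-dom-invariants [session]
  (let [moves    (rules/query-partial session ::simple/move)
        rendered (.-length (.querySelectorAll js/document ".mark"))]
    ;; Rendering is asynchronous, so a strict equality check would be flaky;
    ;; the weaker claim is that the DOM never shows marks the session
    ;; doesn't know about.
    (when (> rendered (count moves))
      (throw (ex-info "DOM shows more marks than the session has moves"
                      {:dom rendered :session (count moves)})))))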

We can even get wackier. We should be able to run the simulation while also interacting with the UI ourselves. If nothing else, it demonstrates that our rules maintain logical consistency: responses for different requests will be arriving from both the simulation and the UI, yet we shouldn't see any invariant exceptions or general rendering weirdness.
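
And if you just want to bang on the UI by hand without running the simulation at all, the invariant watch works on its own too:

;; abuse-async registers its watch under :check-invariants, so use a
;; different key here to avoid the two clobbering each other.
(add-watch session-atom :manual-check-invariants
           (fn [_ _ _ session] (check-invariants session)))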
