• Google Code Jam 2018: Practice session results

    If you’re not familiar with Google Code Jam, it’s a yearly competition organized by Google where contestants compete in solving programming problems. You are given a problem and some limitations, e.g. only a few hours to prepare solutions in later stages of the event, runtime and memory constraints for programs. It’s a fairly popular contest, bringing in thousands of participants from all over the world, with World Finals taking place in Toronto and the best contestant taking home a $15,000 grand prize, fame and glory.

    In the practice session of 2018 there were 4 problems and 2 days were given to solve them. You can find the full list of problems and their descriptions HERE.

    1. Number Guessing

    Here’s the full problem description.

    But in short, the judge program comes up with a secret number in a given range and allows you to make no more than a specific number of guesses. Your program has to guess what the number is, and after each guess the judge will respond with “TOO_BIG”, “TOO_SMALL” or “CORRECT”. If your program fails to guess the number correctly, the judge will respond with “WRONG_ANSWER” and will stop replying.

    Binary search

    This immediately sounds like binary search, doesn't it? You have to guess a number in a 1-dimensional space, and the only feedback the judge provides is whether the guess was too high or too low. So the best we can do is always guess in the middle of the remaining range, discarding the half that we know cannot contain the target. Based on the image example above, it would take 4 guesses to find the right answer using this strategy: binary search has an average performance of O(log(n)).

    My first stumble was in picking the right midpoint. The initial one was the average of the lower and upper bounds: midpoint = (lower_bound + upper_bound) // 2. Integer division is there to avoid producing a decimal number (e.g. (1 + 4) / 2 == 2.5, but (1 + 4) // 2 == 2). This passed the testing_tool.py, but failed the hidden tests, not finding the answer within the required number of guesses. The Wikipedia article also suggests using the floor of (lower_bound + upper_bound) / 2, but taking the ceiling instead passed the tests: midpoint = (lower_bound + upper_bound + 1) // 2.

    The second stumble was in not terminating the program as specified when the judge responded with WRONG_ANSWER. It wasn’t quite clear whether on the last answer it would respond with TOO_BIG and then WRONG_ANSWER, or just WRONG_ANSWER, so I covered for both, yielding this code:

    def run():
        test_cases = read_int()
        for case in range(test_cases):
            case += 1
            [lower_bound, upper_bound] = read_ints()
            guesses = read_int()
    
            guessed_right = False
    
            for guess in range(guesses):
                midpoint = (lower_bound + upper_bound + 1) // 2
                write_guess(midpoint)
    
                answer = read_answer()
    
                if answer == TOO_BIG:
                    upper_bound = midpoint
                elif answer == TOO_SMALL:
                    lower_bound = midpoint
                elif answer == CORRECT:
                    guessed_right = True
                    break
                elif answer == WRONG_ANSWER:
                    exit()
    
            if not guessed_right:
                answer = read_answer()
                exit()
    

    I skipped definitions for read_int(), read_ints(), read_answer(), write_guess() - they’re basic stdin and stdout operations.
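
    For reference, a minimal sketch of what these helpers might look like (the exact implementation may differ; the important detail for interactive problems is flushing stdout after every guess so the judge actually receives it):

    import sys

    def read_int():
        return int(sys.stdin.readline())

    def read_ints():
        return [int(x) for x in sys.stdin.readline().split()]

    def read_answer():
        return sys.stdin.readline().strip()

    def write_guess(guess):
        sys.stdout.write('%d\n' % guess)
        sys.stdout.flush()  # flush so the interactive judge sees the guess immediately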

    2. Senate Evacuation

    Here’s the full problem description.

    In short, there are a number of political parties in the senate, each with some number of senators. We want to evacuate all senators in such a way that no party ever has a majority (>50% of the senators still in the senate), while evacuating only one or two senators at a time.

    Or to rephrase the problem slightly: we have a list of integers, and we can subtract 1 or 2 at a time from any member, in such a way that no member ever exceeds 50% of the sum of all members. Let's start by checking the current state for validity and encode this rephrased version in code:

    def is_valid(senators):
        total_senators = sum(senators)
        limit = total_senators / 2
        return all(map(lambda p: p <= limit, senators))
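
    A quick sanity check with made-up party sizes:

    print(is_valid([2, 2, 1]))  # True: 5 senators in total, no party above the 2.5 limit
    print(is_valid([3, 1, 1]))  # False: party A holds 3 out of 5 senators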
    

    For the strategy itself, the initial hunch is to attack the parties with the most senators first. We will always be able to evacuate at least one senator. As for the second one, after selecting the first candidate we can check again which party has the most members. If, after removing one of its senators, the state is still legal (using is_valid()) - we can take two senators. If not - we have to take just one. We also have to account for the case when there are no senators left at all ([0, 0, ..., 0]), or just one ([1, 0, 0, ..., 0]). In code this idea can be written as:

    def solve(parties, senators):
        solution = []
        most_senators = max(senators)
        while most_senators > 0:
            first_target = senators.index(most_senators)
            senators[first_target] -= 1
    
            second_largest_party = max(senators)
            second_target = senators.index(second_largest_party)
            senators[second_target] -= 1
    
            if second_largest_party == 0 or not is_valid(senators):
                # Can take only one senator, must put second one back
                solution.append(alphabet[first_target])
                senators[second_target] += 1
            else:
                # Can take both senators
                solution.append(alphabet[first_target] + alphabet[second_target])
    
            most_senators = max(senators)
        return solution
    

    And to transform political party numbers back into alphabetical party names: alphabet = dict(enumerate('ABCDEFGHIJKLMNOPQRSTUVWXYZ')).
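
    As a quick sanity check with made-up party sizes (note that solve() mutates its input list):

    print(solve(3, [2, 2, 1]))  # ['AB', 'A', 'BC'] - no party ever holds a majority of the remaining senators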

    With final run() written as:

    def run():
        test_cases = read_int()
        for case in range(test_cases):
            case += 1
            N = read_int()
            senators = read_ints()
            solution = solve(N, senators)
    
            write_answer('Case #%d: %s' % (case, ' '.join(solution)))
    

    3. Steed 2: Cruise Control

    Here’s the full problem description.

    In short, on a one-way road there are a number of horses, each with an initial position, all travelling in the same direction at individual speeds. However, these horses move in a special way: they cannot pass each other. So if one catches up to another, they both continue travelling at the slower one's speed. Our hero wants to travel to a specific point on the same road and has to abide by the same rule of not passing another horse. The hero can travel at any speed, but wants to maintain a constant speed throughout the whole journey without having to speed up or slow down. What is the maximum speed the hero can choose?

    The first hunch was to try to calculate how long each horse will travel at which speed, and deduce the hero's target speed from that. But the problem mentions that there may be up to 1000 horses. We can at least imagine a situation where the fastest horses are placed at the start and the slowest ones near the end, with the faster horses gradually running into the slower ones ahead. It quickly becomes a complex mess.

    The second idea was to compute how long it would take each horse to reach the hero's destination, but to do it from right to left. Right to left, because horses to the right will always be the limiting factor - a horse to the left can't reach the destination faster than the horse to its right: it would eventually catch up and have to slow down, so its journey will take at least as long as that of the horse to the right. And it turns out this approach allows a fairly straightforward linear computation.

    So for each horse we compute how long it would take them to cover the remaining distance from their starting point to the destination. If we take the max() of these durations, that will give us our hero’s travel duration. From there we can find the average speed with average_speed = distance / duration.

    def solve(distance, total_horses, horses):
        longest_duration = max(map(lambda h: (distance - h[0]) / h[1], horses.values()))
        return distance / longest_duration
    
    def run():
        test_cases = read_int()
        for case in range(test_cases):
            case += 1
            [distance, total_horses] = read_ints()
            horses = {}
            for i in range(total_horses):
                [initial, max_speed] = read_ints()
                horses[i] = [initial, max_speed]
            # print([distance, total_horses, horses])
            solution = solve(distance, total_horses, horses)
    
            write_answer('Case #%d: %.7f' % (case, solution))
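
    A quick check with made-up horses: with the destination at 25, a horse starting at position 0 with max speed 5 and another at position 10 with max speed 3 both need 5 time units to arrive, so the hero can travel at a constant 25 / 5 = 5:

    print(solve(25, 2, {0: [0, 5], 1: [10, 3]}))  # 5.0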
    

    4. Bathroom Stalls

    Here’s the full problem description.

    In short, there are N+2 bathroom stalls, with the first and last already occupied. A number of people will come in one after another, each selecting a stall and never leaving. Each person selects the stall that is furthest from its closest occupied neighbors. The goal is to find out, when the last person takes a stall, the largest and the smallest number of free stalls on either side of it.

    This one was rough, taking about 4 hours to figure out, but in the end the solution was mathematical and fairly elegant, at least in my eyes.

    The initial idea would be to write a program that places people in the appropriate stalls sequentially, one by one. But the problem lies in the constraints: the number of stalls can be as large as 10^18, and the number of people can be equal to the number of stalls.

    How would such a data structure be represented? If we represented each stall as 1 byte, that's 10^18 bytes - an exabyte. For comparison, a gigabyte is about 10^9 bytes, so there's no way we're storing this much information, in memory or on disk. northeastern.edu suggests that in 2013 the total amount of data in the world was 4.4 zettabytes, or 4400 exabytes. We certainly won't dedicate 0.02% of the world's storage to solving this problem. And if each stall were represented as a bit - divide the numbers by 8, but they're still gargantuan and unmanageable.

    Especially note that the execution time limit is 30 seconds. Which suggests there has to be a better way.

    The next idea was that maybe it could be related to fractals - similar repeating patterns at different scales. E.g. Here’s how 7 stalls would be filled up:

    1|2|3|4|5|6|7  1|2|3|4|5|6|7
    -------------  -------------
    _|_|_|1|_|_|_  _|_|_|1|_|_|_
    _|2|_|_|_|_|_  _|2|_|1|_|_|_
    _|_|_|_|_|3|_  _|2|_|1|_|3|_
    4|_|_|_|_|_|_  4|2|_|1|_|3|_
    _|_|5|_|_|_|_  4|2|5|1|_|3|_
    _|_|_|_|6|_|_  4|2|5|1|6|3|_
    _|_|_|_|_|_|7  4|2|5|1|6|3|7
    

    Maybe the same pattern that governs the selection of position 1 could be repeated on its left branch for position 2, resulting perhaps in a tree of sorts. But that still does not suggest a way to determine the number of free stalls on each side of a position. So the third option was to brute force it and write down the free stall numbers for each N.

    Stall positioning

    In the table the columns denote N - how many stalls there are - and the rows denote how many people have already selected a stall. Each (stalls, people) combination holds A - the largest number of free stalls next to the last person's stall, and B - the smallest number of free stalls next to the last selected stall.

    Right away we see an interesting trend. E.g. take a look at the first row. As columns grow from left to right, first A increases by one, then B catches up on the next column. And it repeats.

    On the second row it’s very similar. First A increases by one, then B catches up, but this rate of change is slower, only every two columns as opposed to with every one.

    It’s getting difficult to follow, as we’re running out of columns to clearly see the trend, but on the fourth row the rate of change is even slower, with A and B incrementing only once every 4 columns.

    Let’s add some visual aid:

    Stall positioning with markings

    It would be a bit clearer if we had 16 rows and 16 columns, but this suffices to spot the trend.

    We can see that rate of change also changes based on the row, and such rates of change can be grouped based on powers of two:

    • 2^0 = 1 - there is 1 row in the group, so A and B change on every column (row 1);
    • 2^1 = 2 - there are 2 rows in the group, so A and B will change every 2 columns on rows 2 and 3;
    • 2^2 = 4 - there are 4 rows in the group, so A and B will change every 4 columns on rows 4-7;
    • 2^3 = 8 - there are 8 rows in the group, so A and B will change every 8 columns on rows 8-15;

    And so on. We have found a pattern. Now how to calculate A and B if we are given the row and column number?

    Rate of change

    First we need to find out every how many columns A and B change, based on the row. Since groups grow as powers of two, we can take floor(log2(row)) to find which group a row is in. E.g. floor(log2(9)) = 3, which means we are in the fourth group (indexed from 0, see the list above). Raising the base 2 back to the power of 3 gives us a rate of change of 8, which is what we would expect from the pattern above.

    So the derived formula for rate of change seems to be rate_of_change = lambda row: 2 ** math.floor(math.log2(row))

    How many changes already occurred

    Next we need to find out how many changes have occurred based on the column number. We can see that A and B always change in the following pattern:

    0 0 -> 1 0 -> 1 1 -> 2 1 -> 2 2 -> 3 2 -> 3 3 -> ... -> x+1 x -> x+1 x+1 -> ...

    So based on rate of change we can count how many changes have already occurred given a row and a column.

    Taking a look at the stall positioning table again, for any row number, on which column does the row start? We see that the row starts at the column number equal to the row number. E.g. row 4 starts at 4th column, row 9 starts at 9th column. So if we check how many rate_of_change columns fit in this space - that will tell us how many changes have already occurred.

    E.g. for row=3, column=6, the rate of change is 2 ** math.floor(math.log2(3)) == 2. Row 3 starts at column 3, and we're interested in column 6. 6 - 3 leaves a space of 3 for changes to occur in. Dividing that by rate_of_change = 2 gives (6 - 3) / 2 = 1.5, so one full change has already occurred, going 0 0 -> 1 0. If you check the table again, you'll see that this is exactly right.

    So the derived formula seems to be number_of_changes_done = lambda column, row: math.floor((column - row) / rate_of_change(row))

    Putting it all together: calculating A and B

    Now we can calculate A and B.

    If we take a look at the first row, we see that A seems to be equal to about half the number of changes for a specific column. E.g. on row=1 column=3, the result we are aiming for is A=1, B=1, and a total of 2 changes have already occurred. Or on row=1 column=10, the result we are aiming for is A=5, B=4, and a total of 9 changes have already occurred.

    And B tends to also be about half of the number of changes for a specific column, but always either equal to A, or one less than A.

    So my idea was to use ceil and floor, setting A = math.ceil(number_of_changes_done / 2), and B = number_of_changes_done - A, and this seems to work.

    E.g. row=1 column=3: number_of_changes_done = 2, A = math.ceil(2 / 2) => math.ceil(1) => 1, B = 2 - 1 => B = 1. Or row=6, column=14: number_of_changes_done = 2, A = math.ceil(2 / 2) => math.ceil(1) => 1, B = 2 - 1 => B = 1. Or row=2, column=11: number_of_changes_done = 4, A = math.ceil(4 / 2) => math.ceil(2) => 2, B = 4 - 2 => B = 2.

    The formulas seem to check out!

    Final code

    import math

    def get_change_rate_on_row(K):
        # Every how many columns do values change on row K
        return 2 ** math.floor(math.log2(K))

    def number_of_changes_done(N, K):
        # How many A/B changes have occurred by column N on row K
        return math.floor((N - K) / get_change_rate_on_row(K))

    def solve(N, K):
        total_changes = number_of_changes_done(N, K)

        y = math.ceil(total_changes / 2)
        z = total_changes - y

        return [str(y), str(z)]
    
    def run():
        test_cases = read_int()
        for case in range(test_cases):
            case += 1
            [stalls, people] = read_ints()
    
            solution = solve(stalls, people)
    
            # print(case, solution)
            write_answer('Case #%d: %s' % (case, ' '.join(solution)))
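
    As a quick check against the worked examples above, row=1 column=3 and row=6 column=14 should both give A=1, B=1:

    print(solve(3, 1))   # ['1', '1']
    print(solve(14, 6))  # ['1', '1']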
    

    All this work for a straightforward calculation! But there we go, we have calculated the values A and B for any given row and column in a fraction of a second and without using 0.02% of the world's data storage capacity.

    Final thoughts

    The final result ended up being:

    • Place: 661/4199
    • Score: 60
    • Problem 1: 2/2
    • Problem 2: 2/2
    • Problem 3: 2/2
    • Problem 4: 2/3. Last test with 1 <= N <= 10^18 failed with “Wrong answer”. Unfortunately data is not published, so I can’t check what happened, especially at that scale.

    But overall it’s been a fun brain teaser. The last problem was especially challenging, but that made it so much more satisfying to crack.

    The first problem seemed like a clear fit for binary search, but I was unable to think of any metaphors or similar problems for the other three. I managed to muddle through them, but seeing how quickly other contestants were able to breeze through these problems makes me think there are other relatable problems or approaches one could use here. Have you solved these problems? What approach did you take? I'd be curious to learn more.

    And if you haven't tried Google Code Jam or similar competitions before - give them a go some time. They can twist your brain in ways daily CRUD apps don't, making you more capable of solving complex problems in the end.


  • Automating basic tasks in games with OpenCV and Python

    In multiplayer games, bots have long been popular for giving players an edge on certain tasks, like farming in-game currency in World of Warcraft and Eve Online, or levelling up in Runescape. As a technical challenge it can be as basic as a program that plays Tic Tac Toe for you, or as complex as DeepMind and Blizzard using StarCraft II as their AI research environment.

    If you are interested in getting your feet wet with Computer Vision, game automation can be an interesting and fairly easy way to practice: contents are predictable, well known, new data can be easily generated, interactions with the environment can be computer controlled, experiments can be cheaply and quickly tested. Grabbing new data can be as easy as taking a screenshot, while mouse and keyboard can be controlled with multiple languages.

    In this case we will use the most basic technique in CV - template matching. Its effectiveness depends on the problem domain, but the technique can be surprisingly powerful. All it does is slide the template image across the input image and compare the differences. See the Template matching docs for a more visual explanation of how it works.
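
    As a minimal standalone illustration (the file names here are made up), matching a single template and taking the best location looks roughly like this:

    import cv2

    image = cv2.imread('screenshot.png', 0)    # 0 = load as grayscale
    template = cv2.imread('button.png', 0)
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val >= 0.9:
        print('Best match at top-left %s with score %.2f' % (max_loc, max_val))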

    As the test bed I’ve chosen a flash game called Burrito Bison. If you’re not familiar with it - check it out. The goal of the game is to throw yourself as far as possible, squishing gummy bears along the way, earning gold, buying upgrades and doing it all over again. Gameplay itself is split into multiple differently themed sections, separated by giant doors, which the player has to gain enough momentum to break through.

    It is a fairly straightforward game with basic controls and a few menu items. Drag the bison to make it jump off the ropes and start the round, left-click (anywhere) when possible to force-smash gummy bears, and buy upgrades to progress. No keyboard necessary, everything's in 2D, happening on a single screen without the need for the player to manually scroll. The somewhat annoying part is having to grind enough gold coins to buy the upgrades - it is tedious. Luckily it involves few actions and can be automated. Even if done suboptimally - quantity over quality, and the computer will take care of the tediousness.

    If you are interested in the code, you can find the full project on Github.

    Overview and approach

    Let's first take a look at what needs to be automated. Some parts can be ignored if they don't occur frequently, e.g. starting the game itself or some of the screens (e.g. item unlocked), but they can be handled if desired.

    There are 3 stages of the game with multiple screens and multiple objects to click on. Thus the bot will sometimes need to look for certain indicators of whether it's looking at the right screen, or look for objects it needs to interact with. Because we are working with templates, any changes in dimensions, rotation or animation will make it harder to match the objects. Thus the objects the bot looks for have to be static.

    1. Starting the round

    The round is started by pushing the mouse button down on the bison character, dragging the character to aim and releasing it. If the bison hits the "opponent" - a speed boost is provided.

    Launching the player

    There are several objects to think about:

    1. Bison itself. The bot will need to locate the bison character and drag it away to release it
    2. Whether or not the current screen is the round starting screen. Because the player character appears elsewhere in the game, the bot may be confused and misbehave if we use the bison itself. A decent option is to use the ring's corner posts or the "vs" symbol.

    Highlighted template images for launching the player

    2. Round in progress

    Once the round has started, the game plays mostly by itself, no need for bot interactions for the character to bounce around.

    Full rocket boost template

    To help gain more points though we can use the “Rocket” to smash more gummies. To determine when rocket boost is ready we can use the full rocket boost bar as a template. Left mouse click anywhere on screen will trigger it.

    3. Round ended

    Once the round ends there are a few menu screens to dismiss (the pinata ad screen, the missions screen, the final results screen), and the bot will need to click a button to restart the round.

    Pinata screen:

    • we can use the "I'm filled with goodies" text as the template to determine if we're on the pinata screen. The pinata animation itself moves and glows, which may make it difficult for the bot to match against a template, making the pinata itself unsuitable.
    • “Cancel” button, so the bot can click it

    Pinata screen

    Mission screen:

    Simply match the “Tap to continue” to determine if we’re on this particular screen and left mouse click on it to continue.

    Tap to continue

    Round results screen:

    Here the "Next" button is static and we can reliably expect it to be there based on the game's flow. The bot can match and click it.

    Level finished

    Implementation

    For vision we can use OpenCV, which has Python support and is the de facto library for computer vision. There's plenty to choose from for controlling the mouse, but I had luck with Pynput.

    Controls

    As far as controls go, there are 2 basic actions the bot needs to perform with the mouse: 1) left-click on a specific coordinate; 2) left-click and drag from point A to point B.

    Let’s start with moving the mouse. First we create the base class:

    import time
    from pynput.mouse import Button, Controller as MouseController
    
    class Controller:
        def __init__(self):
            self.mouse = MouseController()
    

    The Pynput library allows setting the mouse position via mouse.position = (5, 6), which we could use. But I found that in some games changing the mouse position in such a jumpy way may cause issues with events not triggering correctly, so instead I opted to linearly and smoothly move the mouse from point A to point B over a certain period:

        def move_mouse(self, x, y):
            def set_mouse_position(x, y):
                self.mouse.position = (int(x), int(y))
            def smooth_move_mouse(from_x, from_y, to_x, to_y, speed=0.2):
                steps = 40
                sleep_per_step = speed / steps  # float division; '//' would round the per-step delay down to zero
                x_delta = (to_x - from_x) / steps
                y_delta = (to_y - from_y) / steps
                for step in range(steps):
                    new_x = x_delta * (step + 1) + from_x
                    new_y = y_delta * (step + 1) + from_y
                    set_mouse_position(new_x, new_y)
                    time.sleep(sleep_per_step)
            return smooth_move_mouse(
                self.mouse.position[0],
                self.mouse.position[1],
                x,
                y
            )
    

    The number of steps used here is likely too high, considering the game should be capped at 60fps (or 16.6ms per frame). 40 steps in 200ms means a mouse position change every 5ms, perhaps redundant, but seems to work okay in this case.

    Left mouse click and dragging from point A to B can be implemented using it as follows:

        def left_mouse_click(self):
            self.mouse.click(Button.left)
    
        def left_mouse_drag(self, start, end):
            self.move_mouse(*start)
            time.sleep(0.2)
            self.mouse.press(Button.left)
            time.sleep(0.2)
            self.move_mouse(*end)
            time.sleep(0.2)
            self.mouse.release(Button.left)
            time.sleep(0.2)
    

    Sleeps in between mouse events help the game keep up with the changes. Depending on the framerate these sleep periods may be too long, but compared to humans they’re okay.

    Vision

    I found the vision part to be the most finicky and time-consuming. It helps to save problematic screenshots and write tests against them to ensure objects get detected as expected. During the bot's runtime we'll use the MSS library to take screenshots and perform object detection on them with OpenCV.

    import cv2
    from mss import mss
    from PIL import Image
    import numpy as np
    import time
    
    class Vision:
        def __init__(self):
            self.static_templates = {
                'left-goalpost': 'assets/left-goalpost.png',
                'bison-head': 'assets/bison-head.png',
                'pineapple-head': 'assets/pineapple-head.png',
                'bison-health-bar': 'assets/bison-health-bar.png',
                'pineapple-health-bar': 'assets/pineapple-health-bar.png',
                'cancel-button': 'assets/cancel-button.png',
                'filled-with-goodies': 'assets/filled-with-goodies.png',
                'next-button': 'assets/next-button.png',
                'tap-to-continue': 'assets/tap-to-continue.png',
                'unlocked': 'assets/unlocked.png',
                'full-rocket': 'assets/full-rocket.png'
            }
    
            self.templates = { k: cv2.imread(v, 0) for (k, v) in self.static_templates.items() }
    
            self.monitor = {'top': 0, 'left': 0, 'width': 1920, 'height': 1080}
            self.screen = mss()
    
            self.frame = None
    

    First we start with the class. I cut out all of the template images for objects the bot will need to identify and stored them as png images.

    Images are read with cv2.imread(path, 0) method, where the zero argument will read those images as grayscale, which simplifies the search for OpenCV. As a matter of fact, the bot will only work with grayscale images. And since these template images will be used frequently, we can cache them on initialization.

    Configuration for MSS is hardcoded here, but can be changed or extracted into a constructor argument if we want to.

    Next we add a method to take screenshots with MSS and convert them into grayscale images in the form of Numpy arrays:

        def convert_rgb_to_bgr(self, img):
            return img[:, :, ::-1]
    
        def take_screenshot(self):
            sct_img = self.screen.grab(self.monitor)
            img = Image.frombytes('RGB', sct_img.size, sct_img.rgb)
            img = np.array(img)
            img = self.convert_rgb_to_bgr(img)
            img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    
            return img_gray
    
        def refresh_frame(self):
            self.frame = self.take_screenshot()
    

    The RGB to BGR color conversion is necessary because the MSS library takes screenshots in RGB colors, while OpenCV works with BGR colors; skipping the conversion usually results in incorrect colors. refresh_frame() will be used by our game class to decide when to fetch a new screenshot. This is to avoid taking and processing a screenshot on every template matching call, as it is an expensive operation after all.

    To match templates within screenshots we can use the built-in cv2.matchTemplate(image, template, method) method. It may return multiple candidate matches, including weak ones, but those can be filtered out with a threshold.

        def match_template(self, img_grayscale, template, threshold=0.9):
            """
            Matches template image in a target grayscaled image
            """
    
            res = cv2.matchTemplate(img_grayscale, template, cv2.TM_CCOEFF_NORMED)
            matches = np.where(res >= threshold)
            return matches
    

    You can find more about how the different matching methods work in the OpenCV documentation.
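
    Following the earlier advice about saving problematic screenshots and writing tests against them, a minimal pytest-style check could look roughly like this (the fixture path is made up, and the test assumes the template assets are available for Vision() to load):

    import cv2
    import numpy as np

    def test_detects_next_button_on_saved_screenshot():
        vision = Vision()
        # A previously saved, known-good screenshot of the results screen
        frame = cv2.imread('tests/fixtures/round-finished.png', 0)
        matches = vision.match_template(frame, vision.templates['next-button'])
        assert np.shape(matches)[1] >= 1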

    To simplify matching our problem domain's templates, we add a helper method which looks up the template image by name:

        def find_template(self, name, image=None, threshold=0.9):
            if image is None:
                if self.frame is None:
                    self.refresh_frame()
    
                image = self.frame
    
            return self.match_template(
                image,
                self.templates[name],
                threshold
            )
    

    And while the reason is not obvious yet, let's add a variation of this method which tries to match at least one of several rescaled versions of the template image. As we'll see later, at the start of the round the camera's perspective may change depending on the size of the opponent, making some objects slightly smaller or larger than our template. Such a brute-force check of templates at different scales is expensive, but in this use case it worked acceptably while allowing us to keep using the simple template matching technique.

        def scaled_find_template(self, name, image=None, threshold=0.9, scales=[1.0, 0.9, 1.1]):
            if image is None:
                if self.frame is None:
                    self.refresh_frame()
    
                image = self.frame
    
            initial_template = self.templates[name]
            for scale in scales:
                scaled_template = cv2.resize(initial_template, (0,0), fx=scale, fy=scale)
                matches = self.match_template(
                    image,
                    scaled_template,
                    threshold
                )
                if np.shape(matches)[1] >= 1:
                    return matches
            return matches
    
    

    Game logic

    There are several distinct states of the game:

    • not started/starting
    • started/in progress
    • finished (result screens)

    Since the game follows these states linearly every time, we can use them to limit our Vision checks to only the objects the bot can expect to find in the state it thinks the game is in. It starts in the 'not started' state:

    import numpy as np
    import time
    
    class Game:
    
        def __init__(self, vision, controller):
            self.vision = vision
            self.controller = controller
            self.state = 'not started'
    

    Next we add a few helper methods to check if object exists based on template, and to attempt to click on that object:

        def can_see_object(self, template, threshold=0.9):
            matches = self.vision.find_template(template, threshold=threshold)
            return np.shape(matches)[1] >= 1
    
        def click_object(self, template, offset=(0, 0)):
            matches = self.vision.find_template(template)
    
            x = matches[1][0] + offset[0]
            y = matches[0][0] + offset[1]
    
            self.controller.move_mouse(x, y)
            self.controller.left_mouse_click()
    
            time.sleep(0.5)
    

    Nothing fancy - the heavy lifting is done by OpenCV and the Vision class. As for the click offsets, they're usually necessary because the (x, y) coordinates point to the top-left corner of the matched template, which may not fall within the object's activation zone in-game. Of course, one could center the mouse on the template's center, but object-specific offsets work okay as well.
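
    For completeness, a centered-click variant could look roughly like this (a sketch, not used by the bot) - it offsets the click by half of the template's dimensions, which are available through the Vision instance:

        def click_object_centered(self, template, threshold=0.9):
            matches = self.vision.find_template(template, threshold=threshold)
            # Shift from the template's top-left corner to its center
            h, w = self.vision.templates[template].shape[:2]
            x = matches[1][0] + w // 2
            y = matches[0][0] + h // 2
            self.controller.move_mouse(x, y)
            self.controller.left_mouse_click()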

    Next let’s run over indicator objects, and objects the bot will need to click on:

    • Character’s name on the health bar (indicator whether the round is starting);
    • Left corner post of the ring (to launch the player). Tried using character’s head before, but there are multiple characters in rotation as game progresses;
    • “Filled with goodies!” text (indicator for pinata screen);
    • “Cancel” button (to exit pinata screen);
    • “Tap to continue” text (to skip “Missions” screen);
    • “Next” button (to restart round);
    • Rocket bar (indicator for when rocket boost can be launched);

    These actions and indicators can be implemented with these methods:

        def round_starting(self, player):
            return self.can_see_object('%s-health-bar' % player)
    
        def round_finished(self):
            return self.can_see_object('tap-to-continue')
    
        def click_to_continue(self):
            return self.click_object('tap-to-continue', offset=(50, 30))
    
        def can_start_round(self):
            return self.can_see_object('next-button')
    
        def start_round(self):
            return self.click_object('next-button', offset=(100, 30))
    
        def has_full_rocket(self):
            return self.can_see_object('full-rocket')
    
        def use_full_rocket(self):
            return self.click_object('full-rocket')
    
        def found_pinata(self):
            return self.can_see_object('filled-with-goodies')
    
        def click_cancel(self):
            return self.click_object('cancel-button')
    

    Launching the character is more involved, as it requires dragging the character sideways and releasing. In this case I’ve used the left corner post instead of the character itself because there are two characters in rotation (Bison and Pineapple).

        def launch_player(self):
            # Try multiple sizes of goalpost due to perspective changes for
            # different opponents
            scales = [1.2, 1.1, 1.05, 1.04, 1.03, 1.02, 1.01, 1.0, 0.99, 0.98, 0.97, 0.96, 0.95]
            matches = self.vision.scaled_find_template('left-goalpost', threshold=0.75, scales=scales)
            x = matches[1][0]
            y = matches[0][0]
    
            self.controller.left_mouse_drag(
                (x, y),
                (x-200, y+10)
            )
    
            time.sleep(0.5)
    

    This is where the brute-force attempt to detect templates at different scales comes in. Previously I used plain self.vision.find_template() here, but it failed seemingly at random. What I eventually noticed is that the size of the opponent affects the camera's perspective: e.g. the first green bear is small and static, while the brown bunny jumps up significantly. For bigger opponents the camera zooms out, making the character smaller than the template, while with smaller opponents the character becomes larger. So such a broad range of scales is used in an attempt to cover all character and opponent combinations.

    Lastly, game logic can be written as follows:

        def run(self):
            while True:
                self.vision.refresh_frame()
                if self.state == 'not started' and self.round_starting('bison'):
                    self.log('Round needs to be started, launching bison')
                    self.launch_player()
                    self.state = 'started'
                if self.state == 'not started' and self.round_starting('pineapple'):
                    self.log('Round needs to be started, launching pineapple')
                    self.launch_player()
                    self.state = 'started'
                elif self.state == 'started' and self.found_pinata():
                    self.log('Found a pinata, attempting to skip')
                    self.click_cancel()
                elif self.state == 'started' and self.round_finished():
                    self.log('Round finished, clicking to continue')
                    self.click_to_continue()
                    self.state = 'mission_finished'
                elif self.state == 'started' and self.has_full_rocket():
                    self.log('Round in progress, has full rocket, attempting to use it')
                    self.use_full_rocket()
                elif self.state == 'mission_finished' and self.can_start_round():
                    self.log('Mission finished, trying to restart round')
                    self.start_round()
                    self.state = 'not started'
                else:
                    self.log('Not doing anything')
                time.sleep(1)
    

    Fairly straightforward. Take a new screenshot about once a second, and based on it and internal game state perform specific actions we defined above. Here the self.log() method is just:

        def log(self, text):
            print('[%s] %s' % (time.strftime('%H:%M:%S'), text))
    

    End result

    And that's all there is to it. Basic actions and screens are handled, and in the most common cases the bot should be able to handle itself fine. What may cause problems are ad screens (e.g. sales, promotions), unlocked items, card screens (completed missions) and newly unlocked characters. But such screens are rare.

    You can find the full code for this bot on Github.

    While not terribly exciting, here is a sample bot’s gameplay video:

    All in all this was an interesting experiment. Not optimal, but good enough.

    Matching templates got us pretty far and the technique can still be used to automate remaining unhandled screens (e.g. unlocked items). However, matching templates works well only when perspective doesn’t change, and as we’ve seen from difficulties of launching the player - that is indeed an issue. Although OpenCV mentions Homography as one of more advanced ways to deal with perspective changes, for Burrito Bison a bruteforce approach was sufficient.

    Even if template matching is a basic technique, it can be pretty powerful for automating games. You may not have much luck writing the next Counter-Strike bot, but as you've seen thus far, interacting with simple 2D objects and interfaces can be done fairly easily.

    Happy gold farming!


  • Experimenting with TLA+ and PlusCal 3: Throttling multiple senders

    Last time we covered a basic specification of a single-sender message throttling algorithm, which is nice, but in a real-world scenario you will usually have multiple clients interacting with the system - which is the idea for this post. The goal is to adapt the previous message throttling spec for multiple senders and explore it.

    The idea

    The previous spec modelled the "message transmitter" and "message sender" as a single entity, specified by one program with a loop in it:

    (* --algorithm throttling
    \* ...
    begin
        Simulate:
        while Time < MaxTime do
            either
                SendMessage:
                    ThrottledSendMessage(MessageId);
            or
                TimeStep:
                    Time := Time + 1;
            end either;
        end while;
    end algorithm; *)
    

    However, if we introduce multiple senders, it may be meaningful to control the number of senders, so we could have 1 “message transmitter” and N “message senders”. PlusCal has the concept of processes, which may help model the system in this case: transmitter and senders could be modelled as individual processes. Doing it this way would also allow for exploration of PlusCal processes as well.

    So the idea here is to make the “Server” process just responsible for global time keeping. I don’t think there is a point in modelling inter-process message passing, or perhaps it’s an idea for future extension. “Client” processes would always attempt to send messages without waiting, TLC should take care of modelling distribution of messages in time.

    Spec

    Now that there are multiple clients and we throttle per-client, we will need to keep track of senders in SendMessage:

    macro SendMessage(Sender, Message)
    begin
        MessageLog := MessageLog \union {[Time |-> Time, Message |-> Message, Sender |-> Sender]};
    end macro;
    

    As mentioned above, the “Server” process can be responsible for just tracking time:

    process Server = 1
    begin
        Simulate:
        while Time < MaxTime do
            Time := Time + 1;
        end while;
    end process;
    

    So Server is defined as a PlusCal process that keeps incrementing the global Time variable. At the same time the clients can be defined as a process that just keeps attempting to send messages:

    process Client \in Clients
    variables MessageId = 0;
    begin
        Simulate:
        while Time < MaxTime do
            if SentMessages(self, GetTimeWindow(Time)) < Limit then
                SendMessage(self, MessageId);
                MessageId := MessageId + 1;
            end if;
        end while;
    end process;
    

    process Client \in Clients creates a process for each value in the Clients set. So if Clients = {1, 2, 3}, that will create 3 processes, one for each of the values. And self references the value assigned to the process, e.g. self = 1, self = 2 or self = 3. The macro body of ThrottledSendMessage has been inlined, as MessageId has been turned into a process-local variable.

    And lastly, the invariant needs to change to ensure that all clients have sent no more than the limited amount of messages in any window:

    FrequencyInvariant ==
        \A C \in Clients: \A T \in 0..MaxTime: SentMessages(C, GetTimeWindow(T)) <= Limit
    

    Full PlusCal code:

    ----------------------------- MODULE Throttling -----------------------------
    EXTENDS Naturals, FiniteSets
    CONSTANTS Window, Limit, MaxTime, Clients
    
    (* --algorithm throttling
    variables
        Time = 0,
        MessageLog = {};   
    
    define
        GetTimeWindow(T) ==
            {X \in Nat : X <= T /\ X >= (T - Window)}
            
        SentMessages(Sender, TimeWindow) ==
            Cardinality({Message \in MessageLog: Message.Time \in TimeWindow /\ Message.Sender = Sender})
    end define;   
     
    macro SendMessage(Sender, Message)
    begin
        MessageLog := MessageLog \union {[Time |-> Time, Message |-> Message, Sender |-> Sender]};
    end macro;
    
    process Server = 1
    begin
        Simulate:
        while Time < MaxTime do
            Time := Time + 1;
        end while;
    end process;
    
    process Client \in Clients
    variables MessageId = 0;
    begin
        Simulate:
        while Time < MaxTime do
            if SentMessages(self, GetTimeWindow(Time)) < Limit then
                SendMessage(self, MessageId);
                MessageId := MessageId + 1;
            end if;
        end while;
    end process;
    
        
    end algorithm; *)
    
    FrequencyInvariant ==
        \A C \in Clients: \A T \in 0..MaxTime: SentMessages(C, GetTimeWindow(T)) <= Limit
    
    =============================================================================
    

    Let's try with a small model first: 2 clients, a window of 2 seconds, modelled for 5 total seconds, with a limit of 3 messages.

    Multi-client Throttling: TLC model values

    Here client process names are defined as a set of model values. Model values are special values in TLA+ in that a value is only equal to itself and nothing else (C1 = C1, but C1 != C2, C1 != 5, C1 != FALSE, etc.), which makes them useful for avoiding mix-ups with primitive types. A symmetry set can also be selected, as permutations of the values in this set do not change the meaning: whether we have processes C1 and C2, or C2 and C1 - it doesn't matter. Marking a set as a symmetry set, however, does allow TLC to reduce the search space and check the spec faster.

    When run with these values, TLC doesn't report any issues:

    Multi-client Throttling: TLC success

    Looking good! But so far the spec only checks that the number of messages sent is at or below the limit. But zero messages sent is also under the limit! So if SendMessage is altered to drop all messages:

    macro SendMessage(Sender, Message)
    begin
        MessageLog := {};
    end macro;
    

    And the model is rerun - TLC will get stuck incrementing message IDs, but the invariant itself would still hold. Not great! This fits the idea mentioned in the Amazon paper, which suggests that TLA+ forces one to think about what needs to go right. So what this model is still missing is a check that if a message can be sent - it actually is sent, not dropped or incorrectly throttled.

    But before going down that direction, there is a change to consider that would simplify the model and help resolve the infinite loop problem, thus helping demonstrate the message dropping issue more clearly.

    Simplification

    Hillel Wayne, the author of learntla.com, recently and helpfully suggested tracking the number of messages sent using functions, as opposed to tracking the messages themselves and counting them afterwards. It can be done because we don't really care about message contents. Thanks for the tip!

    To do that, Messages can be redefined as such:

    variables
        Time = 0,
        SimulationTimes = 0..MaxTime,
        Messages = [C \in Clients |-> [T \in SimulationTimes |-> 0]];
    

    Each client will have a function, which returns another function holding the number of messages a Client has sent at a particular Time. Then addressing a specific time of a particular client can be as simple as Messages[Client][Time].
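
    If it helps, a rough Python analogy of this structure (purely illustrative, not part of the spec) is a nested dictionary:

    # Messages = [C \in Clients |-> [T \in SimulationTimes |-> 0]]
    clients = ['C1', 'C2']
    max_time = 4
    messages = {c: {t: 0 for t in range(max_time + 1)} for c in clients}

    messages['C1'][2] += 1                           # SendMessage(C1) at Time = 2
    window = {1, 2}                                  # GetTimeWindow(2) with Window = 1
    total = sum(messages['C1'][t] for t in window)   # messages C1 sent within the window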

    The way time windows are calculated needs to change as well, as this time TLC has to enumerate the window set, and it cannot enumerate the infinite Nat set - a finite integer range has to be used instead:

    Attempted to enumerate { x \in S : p(x) } when S:
    Nat
    is not enumerable
    While working on the initial state:
    /\ Time = 0
    /\ Messages = ( C1 :> (0 :> 0 @@ 1 :> 0 @@ 2 :> 0 @@ 3 :> 0 @@ 4 :> 0) @@
      C2 :> (0 :> 0 @@ 1 :> 0 @@ 2 :> 0 @@ 3 :> 0 @@ 4 :> 0) )
    /\ pc = (C1 :> "Simulate" @@ C2 :> "Simulate" @@ 1 :> "Simulate_")
    /\ SimulationTimes = 0..4
    

    Here's a response by Leslie Lamport explaining the issue in a bit more detail. GetTimeWindow was rewritten as follows:

    GetTimeWindow(T) ==
        {X \in (T-Window)..T : X >= 0 /\ X <= T /\ X >= (T - Window)}
    

    To count the number of messages, summing with reduce can be convenient. For that I borrowed SetSum operator from the snippet Hillel recommended, ending up with:

    TotalSentMessages(Sender, TimeWindow) ==
        SetSum({Messages[Sender][T] : T \in TimeWindow})
    

    The rest of the changes are trivial, so here’s the code:

    ----------------------------- MODULE ThrottlingSimplified -----------------------------
    EXTENDS Naturals, FiniteSets, Sequences
    CONSTANTS Window, Limit, MaxTime, Clients
    
    (* --algorithm throttling
    variables
        Time = 0,
        SimulationTimes = 0..MaxTime,
        Messages = [C \in Clients |-> [T \in SimulationTimes |-> 0]];
    
    define
        GetTimeWindow(T) ==
            {X \in (T-Window)..T : X >= 0 /\ X <= T /\ X >= (T - Window)}
        
            
        Pick(Set) == CHOOSE s \in Set : TRUE
    
        RECURSIVE SetReduce(_, _, _)
        SetReduce(Op(_, _), set, value) == 
          IF set = {} THEN value
          ELSE 
            LET s == Pick(set)
            IN SetReduce(Op, set \ {s}, Op(s, value)) 
                
        SetSum(set) == 
          LET _op(a, b) == a + b
          IN SetReduce(_op, set, 0)
            
        TotalSentMessages(Sender, TimeWindow) ==
            SetSum({Messages[Sender][T] : T \in TimeWindow})
            
    end define;   
     
    macro SendMessage(Sender)
    begin
        Messages[Sender][Time] := Messages[Sender][Time] + 1;
    end macro;
    
    process Server = 1
    begin
        Simulate:
        while Time < MaxTime do
            Time := Time + 1;
        end while;
    end process;
    
    process Client \in Clients
    begin
        Simulate:
        while Time < MaxTime do
            if TotalSentMessages(self, GetTimeWindow(Time)) < Limit then
                SendMessage(self);
            end if;
        end while;
    end process;
    
        
    end algorithm; *)
    
    FrequencyInvariant ==
        \A C \in Clients: \A T \in SimulationTimes: TotalSentMessages(C, GetTimeWindow(T)) <= Limit
    

    And if we run the model, it passes:

    Simplified Multi-client Throttling: model passes

    But if the + 1 is removed from the SendMessage macro, effectively quietly dropping all messages - the model still passes. Gah!

    Ensuring messages do get sent

    The idea to fix this issue is simple - track message sending attempts. The invariant would then check if enough messages have been sent.

    To do so, a new variable MessagesAttempted is added:

    MessagesAttempted = [C \in Clients |-> [T \in SimulationTimes |-> 0]];
    

    And an operator to count the total number of attempts made during a window:

    TotalAttemptedMessages(Sender, TimeWindow) ==
        SetSum({MessagesAttempted[Sender][T] : T \in TimeWindow})
    

    Macro to mark the sending attempt:

    macro MarkSendingAttempt(Sender)
    begin
        MessagesAttempted[Sender][Time] := MessagesAttempted[Sender][Time] + 1;
    end macro;
    

    And updated the client process to mark sending attempts:

    process Client \in Clients
    begin
        Simulate:
        while Time < MaxTime do
            if TotalSentMessages(self, GetTimeWindow(Time)) < Limit then
                SendMessage(self);
                MarkSendingAttempt(self);
            end if;
        end while;
    end process;
    

    As for the invariant, there are two relevant cases: a) when there were fewer sending attempts than the limit permits, all of them should be successfully accepted; b) when the number of attempts is larger than the limit, the number of messages successfully accepted should be exactly whatever Limit is set to. This can be written as such:

    PermittedMessagesAcceptedInvariant ==
        \A C \in Clients:
            \A T \in SimulationTimes:
                \/ TotalAttemptedMessages(C, GetTimeWindow(T)) = TotalSentMessages(C, GetTimeWindow(T))
                \/ TotalAttemptedMessages(C, GetTimeWindow(T)) = Limit
    

    Here’s the full code:

    ----------------------------- MODULE ThrottlingSimplified -----------------------------
    EXTENDS Naturals, FiniteSets, Sequences
    CONSTANTS Window, Limit, MaxTime, Clients
    
    (* --algorithm throttling
    variables
        Time = 0,
        SimulationTimes = 0..MaxTime,
        Messages = [C \in Clients |-> [T \in SimulationTimes |-> 0]],
        MessagesAttempted = [C \in Clients |-> [T \in SimulationTimes |-> 0]];
    
    define
        GetTimeWindow(T) ==
            {X \in (T-Window)..T : X >= 0 /\ X <= T /\ X >= (T - Window)}
        
            
        Pick(Set) == CHOOSE s \in Set : TRUE
    
        RECURSIVE SetReduce(_, _, _)
        SetReduce(Op(_, _), set, value) == 
          IF set = {} THEN value
          ELSE 
            LET s == Pick(set)
            IN SetReduce(Op, set \ {s}, Op(s, value)) 
                
        SetSum(set) == 
          LET _op(a, b) == a + b
          IN SetReduce(_op, set, 0)
            
        TotalSentMessages(Sender, TimeWindow) ==
            SetSum({Messages[Sender][T] : T \in TimeWindow})
            
        TotalAttemptedMessages(Sender, TimeWindow) ==
            SetSum({MessagesAttempted[Sender][T] : T \in TimeWindow})
            
    end define;   
     
    macro SendMessage(Sender)
    begin
        Messages[Sender][Time] := Messages[Sender][Time] + 1;
    end macro;
    
    macro MarkSendingAttempt(Sender)
    begin
        MessagesAttempted[Sender][Time] := MessagesAttempted[Sender][Time] + 1;
    end macro;
    
    process Server = 1
    begin
        Simulate:
        while Time < MaxTime do
            Time := Time + 1;
        end while;
    end process;
    
    process Client \in Clients
    begin
        Simulate:
        while Time < MaxTime do
            if TotalSentMessages(self, GetTimeWindow(Time)) < Limit then
                SendMessage(self);
                MarkSendingAttempt(self);
            end if;
        end while;
    end process;
    
        
    end algorithm; *)
    
    FrequencyInvariant ==
        \A C \in Clients: \A T \in SimulationTimes: TotalSentMessages(C, GetTimeWindow(T)) <= Limit
        
    PermittedMessagesAcceptedInvariant ==
        \A C \in Clients:
            \A T \in SimulationTimes:
                \/ TotalAttemptedMessages(C, GetTimeWindow(T)) = TotalSentMessages(C, GetTimeWindow(T))
                \/ TotalAttemptedMessages(C, GetTimeWindow(T)) = Limit
    

    If messages are successfully accepted, the model passes. However, if SendMessage is purposefully broken again by commenting out the incrementation, the model fails:

    Invariant PermittedMessagesAcceptedInvariant is violated.
    

    Simplified Multi-client Throttling: permitted message invariant violated

    Which is great, as this is exactly what we wanted to achieve.


  • Experimenting with TLA+ and PlusCal 2: Throttling

    Last time we briefly looked over resources for learning TLA+ and PlusCal, as well as wrote a basic spec to prove equivalence of a few logic formulas. In this post I thought it would be interesting to write a spec for a more common scenario.

    The idea

    This is inspired by the issue we had with MailPoet 2. An attacker was able to continuously sign up users; the users would get sent a subscription confirmation email with each subscription, and that way some email addresses were effectively bombed with emails. Let's try to model and explore this situation.

    In the actual system we have the client, server, emails being sent out, network connections… Lots of details, most of them we can probably ignore safely and still model the system sufficiently.

    Sending messages without any rate-limiting

    Let’s start with the initial situation: we want to allow TLC to send arbitrary “Messages” (actual content, metadata is irrelevant here) over time, and we want to put some limit on how many messages can be sent out over some period of time.

    For a start we’ll model just a single sender, storing messages in a set as a log, noting their contents and time when they were sent. Contents would be irrelevant, but in this case it helps humans interpret the situation. We’ll allow TLC to keep sending an arbitrary number of messages one by one, and then increment the time. That way we allow our model to “send 60 messages in one second”.

    Finally, we add an invariant to make sure that the number of messages sent in one second does not go over the limit. Here’s the code:

    ----------------------------- MODULE Throttling -----------------------------
    EXTENDS Naturals, FiniteSets
    CONSTANTS Window, Limit, MaxTime
    
    (* --algorithm throttling
    variables
        Time = 0,
        MessageId = 0,
        MessageLog = {};
    
    macro SendMessage(Message)
    begin
        MessageLog := MessageLog \union {[Time |-> Time, Message |-> Message]};
    end macro;
    
    begin
        Simulate:
        while Time < MaxTime do
            either
                SendMessage:
                    SendMessage(MessageId);
                    MessageId := MessageId + 1;
            or
                TimeStep:
                    Time := Time + 1;
            end either;
        end while;
    end algorithm; *)
    
    FrequencyInvariant ==
        \A T \in 0..MaxTime: (Cardinality({Message \in MessageLog: Message["Time"] = T}) <= Limit)
    

    Each entry in MessageLog is a record with 2 keys: Time and Message, and MessageLog itself is a set. The FrequencyInvariant invariant checks that during every second of our simulation the number of messages sent does not exceed our Limit.
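
    As a rough Python analogy (purely illustrative, not part of the spec), the same check could be written as:

    limit, max_time = 3, 5
    message_log = [{'Time': 0, 'Message': 0}, {'Time': 0, 'Message': 1}]  # example log entries
    frequency_invariant = all(
        sum(1 for m in message_log if m['Time'] == t) <= limit
        for t in range(max_time + 1)
    )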

    We’ll use these constant values for our initial runs:

    Window <- 3
    MaxTime <- 5
    Limit <- 3
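
    As an aside, if you prefer running TLC from the command line instead of through the Toolbox, the same model settings could be expressed in a configuration file along these lines (a sketch - the post itself sets everything up in the Toolbox model editor):

    SPECIFICATION Spec
    INVARIANT FrequencyInvariant
    CONSTANTS
        Window = 3
        MaxTime = 5
        Limit = 3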
    

    If we translate the PlusCal code to TLA+ (with Ctrl+T in the Toolbox) and run the model with TLC, it quickly finds an error, as we expected:

    TLC throttling error: message limit exceeded

    Since we did not perform any throttling and instead allowed TLC to “send” an arbitrary number of messages - TLC sent 4 messages before the invariant failed.
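
    The post only includes a screenshot of the error, but with these constants the shortest violating trace is four SendMessage steps before any TimeStep, ending in a state roughly like this (my reconstruction, omitting the pc bookkeeping variable the translation adds):

    /\ Time = 0
    /\ MessageId = 4
    /\ MessageLog = { [Time |-> 0, Message |-> 0],
                      [Time |-> 0, Message |-> 1],
                      [Time |-> 0, Message |-> 2],
                      [Time |-> 0, Message |-> 3] }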

    Time based throttling

    As the next step let’s make use of the Window constant and add a throttled variant of the SendMessage() macro that accepts a message only if doing so does not exceed our Limit.

    First of all, given a Time, we want to grab a set of times falling within our window. So if the current Time = 5, and our Window = 2, we want it to return {3, 4, 5}:

    define
      GetTimeWindow(T) ==
          {X \in Nat : X <= T /\ X >= (T - Window)}
    \* ...
    

    Next, we want to define a way to count the number of messages sent over a set of specific times. So if our time window is {3, 4, 5, 6}, we count the total number of messages sent at Time = 3, Time = 4, …, Time = 6:

    \* ...
        SentMessages(TimeWindow) ==
            Cardinality({Message \in MessageLog: Message.Time \in TimeWindow})
    end define;
    

    Then we can write a ThrottledSendMessage variant of the SendMessage macro:

    macro ThrottledSendMessage(Message)
    begin
        if SentMessages(GetTimeWindow(Time)) < Limit then
            SendMessage(Message);
        end if;
    end macro;
    

    Here we silently drop any overflow messages, “sending” only what’s within the limit. This is the code we have at this point:

    ----------------------------- MODULE Throttling -----------------------------
    EXTENDS Naturals, FiniteSets
    CONSTANTS Window, Limit, MaxTime
    
    (* --algorithm throttling
    variables
        Time = 0,
        MessageId = 0,
        MessageLog = {};   
    
    define
        GetTimeWindow(T) ==
            {X \in Nat : X <= T /\ X >= (T - Window)}
            
        SentMessages(TimeWindow) ==
            Cardinality({Message \in MessageLog: Message.Time \in TimeWindow})
    end define;   
     
    macro SendMessage(Message)
    begin
        MessageLog := MessageLog \union {[Time |-> Time, Message |-> Message]};
    end macro;
    
    macro ThrottledSendMessage(Message)
    begin   
        if SentMessages(GetTimeWindow(Time)) < Limit then
            SendMessage(Message);
        end if;
    end macro;
         
    begin
        Simulate:
        while Time < MaxTime do
            either
                SendMessage:
                    ThrottledSendMessage(MessageId);
                    MessageId := MessageId + 1;
            or
                TimeStep:
                    Time := Time + 1;
            end either;
        end while;    
    end algorithm; *)
    
    FrequencyInvariant ==
        \A T \in 0..MaxTime: (Cardinality({Message \in MessageLog: Message["Time"] = T}) <= Limit)
    
    =============================================================================
    

    At this point we can run our model:

    TLC throttling: checking model does not finish

    And hmm… After 10-20 minutes of checking TLC still hasn’t finished - the number of distinct states keeps going up. I even tried reducing our constants to Window <- 2, MaxTime <- 3, Limit <- 3. As TLC climbed over 2GB of memory used, I cancelled the process. Now what?

    Assuming the process would never terminate, what could cause that? We did limit our “simulation” to MaxTime <- 3, time keeps moving on, and the number of messages “sent” is capped by Limit. But looking back at the SendMessage label, ThrottledSendMessage() gets called and its result isn’t considered: regardless of whether or not a message has actually been sent, MessageId always keeps going up, and every new MessageId value is a new distinct state for TLC to explore. So that’s the hypothesis.

    Let’s try capping that by moving MessageId := MessageId + 1; from the SendMessage label into the ThrottledSendMessage macro, performing the increment only if the message is actually accepted.
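
    The post doesn’t show the updated code, but the change might look roughly like this (a sketch - PlusCal allows a macro to call a previously defined macro, and the two assignments touch different variables, so they can happen in the same step):

    macro ThrottledSendMessage(Message)
    begin
        if SentMessages(GetTimeWindow(Time)) < Limit then
            SendMessage(Message);
            \* count only the messages that were actually accepted
            MessageId := MessageId + 1;
        end if;
    end macro;

    The SendMessage label in the main body then only calls ThrottledSendMessage(MessageId). Now if we rerun the model: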

    TLC throttling: model checking finishes

    We can see that TLC found no errors across 142 distinct states, and the number of messages sent at any single Time value never exceeded Limit <- 3. Great!

    Next let’s strengthen the FrequencyInvariant. Right now it checks whether the number of messages sent at ONE SPECIFIC VALUE OF Time does not exceed the Limit. But we want to ensure that the same holds for the whole Window of time values:

    FrequencyInvariant ==
        \A T \in 0..MaxTime: SentMessages(GetTimeWindow(T)) <= Limit
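
    To see why this is stronger, consider a hypothetical log that the throttled spec cannot actually produce, assuming Window <- 3 and Limit <- 3 (the name OverfullLog is mine, for illustration only):

    \* 3 messages at each of the times 0, 1 and 2: no single Time value exceeds Limit,
    \* but if MessageLog held this value, SentMessages(GetTimeWindow(2)) would be 9 > Limit.
    OverfullLog == { [Time |-> T, Message |-> 3 * T + I] : T \in 0..2, I \in 0..2 }

    The old per-Time invariant would accept such a log, while the window-based one would not.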
    

    And because the spec already implemented time-window based rate limiting - the number of states checked stayed the same:

    TLC throttling: model checking finishes

    At this point, running TLC with smaller model sizes helped build confidence in the model. But will it hold if we crank up the constants to say… Window <- 5, MaxTime <- 30, Limit <- 10? Unfortunately, after an hour of running the model the number of distinct states and the queue size just kept going up at a consistent pace, so after it consumed over 1GB of memory it had to be terminated.

    TLC throttling: model checking did not finish

    I think Limit being high creates quite a few states, as you can spread out 10 messages over 5 distinct times in 5^10 (about 9.8 million) ways. Spread that over roughly 30-5+1=26 possible windows and we’re into hundreds of millions of possible states. The number of states quickly goes up even in this fairly simple case! It took my laptop ~10 minutes to explore the ~13 million states reached with the Window <- 5, MaxTime <- 15, Limit <- 5 configuration:

    TLC throttling: larger model checking success

    For the sake of the example that’s probably sufficient. I don’t see much point in checking larger models in this case, but on a faster computer, or on a distributed set of servers, they could be checked much faster. Yes - TLC does work in a distributed mode, which could be an interesting feature to play around with in itself.


  • Experimenting with formal methods: TLA+ and PlusCal

    When you hear about formal methods in the context of software development, what do you picture in your mind? I would imagine a time-consuming and possibly tedious process used in building mission-critical software, e.g. for a pacemaker or a space probe.

    It may be something briefly visited during Computer Science courses at a college or university, quickly running through a few trivial examples. I had never actually had a chance to look into formal methods before, so after reading the “loudly” titled “The Coming Software Apocalypse” in The Atlantic (an interesting article, by the way), the topic seemed intriguing.

    Interest was elevated further by the fact that the Amazon Web Services team has had good success in using formal methods in their day-to-day work, and they are continuing to push their adoption. (Use of Formal Methods at Amazon Web Services)

    So the goal here is to learn enough about formal methods to be able to use them in at least one case on my own, whether at work or on a pet project. And since there are a few books about TLA+, that will be the focus of this experiment.

    Why?

    TLA+ is built to model algorithms and systems, to make sure they work as expected and to catch any bugs lurking in them. It can be very helpful in finding subtle bugs in non-trivial algorithms or systems we create. Once you write a specification, the TLC model checker that comes with the TLA+ Toolbox will check it and try to find issues - e.g. in one of the examples, a bug TLC found took about 30 steps to reproduce. It also provides tools to explore certain properties of the system in question, e.g. what the system is allowed to do, or what the system eventually needs to do. This improves our confidence in the systems we build.

    With TLA+ we are working with a model of a system, which can work both for and against us. With a model we can iterate faster, glossing over irrelevant details and exploring our ideas before we build out the final systems. But because we work with a model, we have to take great care in describing it so as not to gloss over critical behavior-altering details.

    The other benefit we get is a better understanding of the systems we build, how they work and how they may fail. Writing a spec forces you to think about what needs to go right. To quote the AWS paper above, “we have found this rigorous “what needs to go right?” approach to be significantly less error prone than the ad hoc “what might go wrong?” approach.”

    Learning about TLA+

    To start with, Leslie Lamport has produced a TLA+ Video Course. The videos very clearly explain how TLA+ works and how to set up the Toolbox and get it running, and walk through writing and running a few specifications, including parts of widely used algorithms such as Paxos Commit (video version, paper).

    Once up and running, Mr. Lamport’s “Specifying Systems” book provides an even more detailed look at TLA+. The book is comprehensive and can be a tough read; however, it consists of multiple parts, and the first part “contains all that most engineers need to know about writing specifications” (83 pages).

    And if you have checked out any of the specifications shared in the book and videos above, you may find that the way specifications are written differs quite a bit from the day-to-day programming languages one may use. Luckily, the Amazon paper mentions PlusCal, an accompanying language that translates to TLA+ and feels more like a regular C-style programming language. They also report some engineers being more comfortable and productive with PlusCal, while still needing the extra flexibility provided by TLA+. You can learn more about PlusCal in Hillel Wayne’s online book Learn TLA+.

    Exploring tautologies

    A tautology is defined as “[…] a formula which is “always true” — that is, it is true for every assignment of truth values to its simple components. You can think of a tautology as a rule of logic.” (ref). And in formal logic there are formulas which are equivalent, e.g. A -> B (A implies B) is equivalent to ~A \/ B (not A or B). So as a very basic starting point we can prove that with TLA+.

    ---------------------------- MODULE Tautologies ----------------------------
    VARIABLES P, Q
    
    F1(A, B) == A => B
    F2(A, B) == ~A \/ B
    
    FormulasEquivalent == F1(P, Q) <=> F2(P, Q)
    
    Init ==
        /\ P \in BOOLEAN
        /\ Q \in BOOLEAN
    
    Next ==
        /\ P' \in BOOLEAN
        /\ Q' \in BOOLEAN
    
    Spec ==
        Init /\ [][Next]_<<P, Q>>
    =============================================================================
    

    The idea here is fairly simple: let TLC pick an arbitrary pair of boolean values, and make sure that whatever those values are, the results of F1 and F2 are always equivalent.

    We declare 2 variables in our spec: P and Q. In Init we state that both variables have some value from the BOOLEAN set (which is equal to {TRUE, FALSE}). We don’t have to specify which particular values - TLC will pick them for us. Next is our step for mutating the values of P and Q: P' designates what the value of P will be after a Next step is taken, and again we allow it to be any value from the BOOLEAN set. This is important, because if we didn’t constrain the new value at all, the specification would allow it to be anything: NULL, "somestring", the set {2, 5, 13}, etc. The Spec formula Init /\ [][Next]_<<P, Q>> then simply says: start in a state satisfying Init, and every step either satisfies Next or leaves <<P, Q>> unchanged. So up to this point we have only allowed TLC to pick values for P and Q; the possible combinations are P = FALSE, Q = FALSE; P = FALSE, Q = TRUE; P = TRUE, Q = FALSE; and P = TRUE, Q = TRUE.

    We check our condition by configuring TLC to verify that our invariant FormulasEquivalent holds. An invariant is a property that has to hold in every state as the data mutates. In our case it’s “whatever the P and Q values are, the results of F1(P, Q) and F2(P, Q) should be equivalent”.
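
    In the Toolbox this is done in the model editor under the invariants to check; the equivalent plain-text TLC configuration file (e.g. a Tautologies.cfg next to the spec) would look something like this sketch:

    SPECIFICATION Spec
    INVARIANT FormulasEquivalent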

    TLC model configuration for Tautologies

    Successful TLC results for Tautologies

    Once we run TLC, it reports that there were 4 distinct states (as we expected) and that there are no errors - our model worked as intended.

    Now, if we intentionally make a mistake and declare F2 as F2(A, B) == A \/ B by replacing ~A with A (removing negation), we can rerun the model and see what TLC finds:

    Errors in Tautologies found by TLC

    We see that TLC found two states where our invariant does not hold:

    Invariant FormulasEquivalent is violated by the initial state:
    /\ P = FALSE
    /\ Q = FALSE
    
    Invariant FormulasEquivalent is violated by the initial state:
    /\ P = TRUE
    /\ Q = FALSE
    

    Which is exactly what we expected. Similarly we can check equivalence for ~(A /\ B) <=> (~A \/ ~B), P <=> (P \/ P), (P => Q) <=> (~Q => ~P), etc. Useful? Not very, but it gets us writing and checking some basic specs.
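
    For instance, checking the first of those (De Morgan’s law) would just mean adding two more operators and one more invariant to the same spec, something along these lines (a sketch; the operator names are mine):

    F3(A, B) == ~(A /\ B)
    F4(A, B) == ~A \/ ~B

    DeMorganEquivalent == F3(P, Q) <=> F4(P, Q)

    Then we’d add DeMorganEquivalent to the list of invariants TLC checks, alongside FormulasEquivalent.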

    Closing thoughts

    After running through the tutorial videos, the first part of the “Specifying Systems” book and the “Learn TLA+” book, I thought I had a grasp of it, but the air quickly ran out once I attempted to write specifications without external guidance. Perhaps specifying a sorting algorithm or solving a magic square was too much too quickly, and more practice is needed with simpler specifications. Finding an interesting yet achievable algorithm to specify is proving to be a challenge. So this part stops here with only the first trivial spec proving equivalent logic formulas, and in the next part it may be interesting to try specifying a basic rate limiting algorithm.

    In the meantime, running through the TLA+ examples on GitHub helped me better understand the types of algorithms TLA+ can specify and the way more complex specifications are written.