• Automating basic tasks in games with OpenCV and Python

    In multiplayer games, bots have long been popular for giving players an edge in certain tasks, like farming in-game currency in World of Warcraft and EVE Online, or leveling up in RuneScape. As a technical challenge, a bot can be as basic as a program that plays Tic-Tac-Toe for you, or as complex as DeepMind and Blizzard using StarCraft II as their AI research environment.

    If you are interested in getting your feet wet with computer vision, game automation can be an interesting and fairly easy way to practice: the content is predictable and well known, new data can be generated easily, interactions with the environment can be computer controlled, and experiments can be tested cheaply and quickly. Grabbing new data can be as easy as taking a screenshot, while the mouse and keyboard can be controlled from multiple languages.

    In this case we will use the most basic strategy in CV - template matching. How well it works depends on the problem domain, of course, but the technique can be surprisingly powerful. All it does is slide the template image across the input image and compare the differences. See the Template matching docs for a more visual explanation of how it works.
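
    To make the idea concrete, here is a minimal, self-contained sketch of template matching (the image file names are placeholders, not assets from this project):

    import cv2

    # Load both images as grayscale (the 0 flag) to keep the comparison simple
    screenshot = cv2.imread('screenshot.png', 0)
    template = cv2.imread('template.png', 0)

    # Slide the template over the screenshot, scoring every position
    result = cv2.matchTemplate(screenshot, template, cv2.TM_CCOEFF_NORMED)

    # For TM_CCOEFF_NORMED the best match is the position with the highest score
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    print('Best match at %s with score %.2f' % (max_loc, max_val))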

    As the test bed I’ve chosen a flash game called Burrito Bison. If you’re not familiar with it - check it out. The goal of the game is to throw yourself as far as possible, squishing gummy bears along the way, earning gold, buying upgrades and doing it all over again. Gameplay itself is split into multiple differently themed sections, separated by giant doors, which the player has to gain enough momentum to break through.

    It is a fairly straightforward game with basic controls and a few menu items. Drag the bison to make it jump off the ropes and start the round, left-click (anywhere) when possible to force smash gummy bears, and buy upgrades to progress. No keyboard is necessary, everything is in 2D and happens on a single screen, with no need for the player to scroll manually. The somewhat annoying part is having to grind enough gold coins to buy the upgrades - it is tedious. Luckily it involves few actions and can be automated. Even if done suboptimally - quantity over quality, and the computer will take care of the tediousness.

    If you are interested in the code, you can find the full project on Github.

    Overview and approach

    Let's first take a look at what needs to be automated. Some parts can be ignored if they don't occur frequently, e.g. starting the game itself or some of the screens (e.g. item unlocked), but they can be handled if desired.

    There are 3 stages of the game, with multiple screens and multiple objects to click on. The bot will sometimes need to look for certain indicators to tell whether it's looking at the right screen, or look for objects it needs to interact with. Because we are working with templates, any changes in dimensions, rotation or animation will make the objects harder to match, so the objects the bot looks for have to be static.

    1. Starting the round

    The round is started by pressing the mouse button down on the bison character, dragging the character to aim and releasing it. If the bison hits the "opponent", a speed boost is provided.

    Launching the player

    There are several objects to think about:

    1. The bison itself. The bot will need to locate the bison character and drag it away to release it.
    2. Whether or not the current screen is the round starting screen. Because the player character appears elsewhere in the game, the bot may get confused and misbehave if we use the bison itself. A decent option could be to use the ring's corner posts or the "vs" symbol.

    Highlighted template images for launching the player

    2. Round in progress

    Once the round has started, the game plays mostly by itself; no bot interaction is needed for the character to bounce around.

    Full rocket boost template

    To help gain more points, though, we can use the "Rocket" to smash more gummies. To determine when the rocket boost is ready, we can use the full rocket boost bar as a template. A left mouse click anywhere on the screen will trigger it.

    3. Round ended

    Once the round ends there are a few menu screens to dismiss (pinata ad screen, missions screen, final results screen), and the bot will need to click a button to restart the round.

    Pinata screen:

    • we can use the "I'm filled with goodies" text as the template to determine if we're on the pinata screen. The pinata animation itself moves and glows, which may make it difficult for the bot to match against a template, so the pinata is unsuitable.
    • the "Cancel" button, so the bot can click it to dismiss the screen

    Pinata screen

    Mission screen:

    Simply match "Tap to continue" to determine if we're on this particular screen and left-click on it to continue.

    Tap to continue

    Round results screen:

    Here the "Next" button is static and we can reliably expect it to be there based on the game's flow. The bot can match and click it.

    Level finished

    Implementation

    For vision we can use OpenCV, which has Python support and is the de facto library for computer vision. There's plenty to choose from for controlling the mouse, but I had luck with Pynput.

    Controls

    As far as controls go, there are 2 basic actions the bot needs to perform with the mouse: 1) left click on a specific coordinate, and 2) left click and drag from point A to point B.

    Let’s start with moving the mouse. First we create the base class:

    import time
    from pynput.mouse import Button, Controller as MouseController
    
    class Controller:
        def __init__(self):
            self.mouse = MouseController()
    

    The Pynput library allows setting the mouse position via mouse.position = (5, 6), which we can use. I found that in some games changing the mouse position in such a jumpy way may cause issues with events not triggering correctly, so instead I opted to move the mouse linearly and smoothly from point A to point B over a certain period:

        def move_mouse(self, x, y):
            def set_mouse_position(x, y):
                self.mouse.position = (int(x), int(y))
            def smooth_move_mouse(from_x, from_y, to_x, to_y, speed=0.2):
                steps = 40
                sleep_per_step = speed / steps  # float division; '//' would floor this to 0 and skip the sleeps
                x_delta = (to_x - from_x) / steps
                y_delta = (to_y - from_y) / steps
                for step in range(steps):
                    new_x = x_delta * (step + 1) + from_x
                    new_y = y_delta * (step + 1) + from_y
                    set_mouse_position(new_x, new_y)
                    time.sleep(sleep_per_step)
            return smooth_move_mouse(
                self.mouse.position[0],
                self.mouse.position[1],
                x,
                y
            )
    

    The number of steps used here is likely too high, considering the game should be capped at 60fps (roughly 16.6ms per frame). 40 steps over 200ms means a mouse position change every 5ms, which is perhaps redundant, but it seems to work okay in this case.

    Left mouse click and dragging from point A to B can be implemented using it as follows:

        def left_mouse_click(self):
            self.mouse.click(Button.left)
    
        def left_mouse_drag(self, start, end):
            self.move_mouse(*start)
            time.sleep(0.2)
            self.mouse.press(Button.left)
            time.sleep(0.2)
            self.move_mouse(*end)
            time.sleep(0.2)
            self.mouse.release(Button.left)
            time.sleep(0.2)
    

    The sleeps in between mouse events help the game keep up with the changes. Depending on the framerate these sleep periods may be longer than necessary, but compared to a human player they're still quick.

    Vision

    I found the vision part to be the most finicky and time consuming. It helps to save problematic screenshots and write tests against them to ensure objects get detected as expected (a small example test is sketched at the end of this section). During the bot's runtime we'll use the MSS library to take screenshots and perform object detection on them with OpenCV.

    import cv2
    from mss import mss
    from PIL import Image
    import numpy as np
    import time
    
    class Vision:
        def __init__(self):
            self.static_templates = {
                'left-goalpost': 'assets/left-goalpost.png',
                'bison-head': 'assets/bison-head.png',
                'pineapple-head': 'assets/pineapple-head.png',
                'bison-health-bar': 'assets/bison-health-bar.png',
                'pineapple-health-bar': 'assets/pineapple-health-bar.png',
                'cancel-button': 'assets/cancel-button.png',
                'filled-with-goodies': 'assets/filled-with-goodies.png',
                'next-button': 'assets/next-button.png',
                'tap-to-continue': 'assets/tap-to-continue.png',
                'unlocked': 'assets/unlocked.png',
                'full-rocket': 'assets/full-rocket.png'
            }
    
            self.templates = { k: cv2.imread(v, 0) for (k, v) in self.static_templates.items() }
    
            self.monitor = {'top': 0, 'left': 0, 'width': 1920, 'height': 1080}
            self.screen = mss()
    
            self.frame = None
    

    We start with the class itself. I cut out template images for all of the objects the bot will need to identify and stored them as PNG files.

    Images are read with the cv2.imread(path, 0) call, where the zero argument loads the image as grayscale, which simplifies the search for OpenCV. In fact, the bot will only work with grayscale images. And since these template images will be used frequently, we can cache them on initialization.

    The configuration for MSS is hardcoded here, but it can be changed or extracted into a constructor argument if we want to.

    Next we add a method to take screenshots with MSS and convert them into grayscale images in the form of Numpy arrays:

        def convert_rgb_to_bgr(self, img):
            return img[:, :, ::-1]
    
        def take_screenshot(self):
            sct_img = self.screen.grab(self.monitor)
            img = Image.frombytes('RGB', sct_img.size, sct_img.rgb)
            img = np.array(img)
            img = self.convert_rgb_to_bgr(img)
            img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    
            return img_gray
    
        def refresh_frame(self):
            self.frame = self.take_screenshot()
    

    The RGB to BGR color conversion is necessary because the MSS library takes screenshots using RGB colors, while OpenCV expects BGR; skipping the conversion will usually result in incorrect colors. refresh_frame() will be used by our game class to decide when to fetch a new screenshot. This avoids taking and processing a screenshot on every template matching call, which is an expensive operation after all.

    To match templates within screenshots we can use the built-in cv2.matchTemplate(image, template, method) function. It may return matches that are not exact, but those can be filtered out with a threshold.

        def match_template(self, img_grayscale, template, threshold=0.9):
            """
            Matches template image in a target grayscaled image
            """
    
            res = cv2.matchTemplate(img_grayscale, template, cv2.TM_CCOEFF_NORMED)
            matches = np.where(res >= threshold)
            return matches
    

    You can find more about how the different matching methods work in the OpenCV documentation
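
    When tuning thresholds, it can help to draw the matches onto the frame and inspect them by eye. A small helper one could add to the Vision class for that (the method name and output path here are arbitrary, not part of the actual project):

        def debug_draw_matches(self, name, threshold=0.9, out_path='debug.png'):
            # Draw a rectangle around every match of the given template on the
            # current frame and save the annotated image for manual inspection.
            if self.frame is None:
                self.refresh_frame()
            template = self.templates[name]
            h, w = template.shape
            annotated = cv2.cvtColor(self.frame, cv2.COLOR_GRAY2BGR)
            matches = self.match_template(self.frame, template, threshold)
            for y, x in zip(*matches):
                cv2.rectangle(annotated, (int(x), int(y)), (int(x) + w, int(y) + h), (0, 0, 255), 2)
            cv2.imwrite(out_path, annotated)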

    To simplify matching our problem domain's templates, we add a helper method which looks up a template image by name:

        def find_template(self, name, image=None, threshold=0.9):
            if image is None:
                if self.frame is None:
                    self.refresh_frame()
    
                image = self.frame
    
            return self.match_template(
                image,
                self.templates[name],
                threshold
            )
    

    And while the reason is not obvious yet, let's add a variation of this method which tries to match at least one of several rescaled template images. As we'll see later, at the start of the round the camera's perspective may change depending on the size of the opponent, making some objects slightly smaller or larger than our template. Such a brute-force check of templates at different scales is expensive, but in this use case it worked acceptably while letting us keep using the simple template matching technique.

        def scaled_find_template(self, name, image=None, threshold=0.9, scales=[1.0, 0.9, 1.1]):
            if image is None:
                if self.frame is None:
                    self.refresh_frame()
    
                image = self.frame
    
            initial_template = self.templates[name]
            for scale in scales:
                scaled_template = cv2.resize(initial_template, (0,0), fx=scale, fy=scale)
                matches = self.match_template(
                    image,
                    scaled_template,
                    threshold
                )
                if np.shape(matches)[1] >= 1:
                    return matches
            return matches
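
    Speaking of writing tests against saved screenshots, a pytest-style test might look something like this (the fixture path and the import are hypothetical and depend on how the project is laid out):

    import cv2
    import numpy as np
    from vision import Vision  # hypothetical module path - adjust to the actual layout

    def test_detects_cancel_button_on_pinata_screen():
        vision = Vision()
        # A previously saved screenshot of the pinata screen
        frame = cv2.imread('tests/fixtures/pinata-screen.png', 0)
        matches = vision.find_template('cancel-button', image=frame)
        assert np.shape(matches)[1] >= 1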
    
    

    Game logic

    There are several distinct states of the game:

    • not started/starting
    • started/in progress
    • finished (result screens)

    Since the game moves through these states linearly every time, we can use them to limit Vision checks to only the objects the bot can expect to find in the state it thinks the game is in. It starts in the not started state:

    import numpy as np
    import time
    
    class Game:
    
        def __init__(self, vision, controller):
            self.vision = vision
            self.controller = controller
            self.state = 'not started'
    

    Next we add a few helper methods: one to check whether an object exists based on its template, and one to attempt to click on that object:

        def can_see_object(self, template, threshold=0.9):
            matches = self.vision.find_template(template, threshold=threshold)
            return np.shape(matches)[1] >= 1
    
        def click_object(self, template, offset=(0, 0)):
            matches = self.vision.find_template(template)
    
            x = matches[1][0] + offset[0]
            y = matches[0][0] + offset[1]
    
            self.controller.move_mouse(x, y)
            self.controller.left_mouse_click()
    
            time.sleep(0.5)
    

    Nothing fancy - the heavy lifting is done by OpenCV and the Vision class. Offsets for object clicking will usually be necessary, as the (x, y) coordinates refer to the top left corner of the matched template, which may not always fall within the object's activation zone in-game. Of course, one could center the mouse on the template's center instead, but object-specific offsets work okay as well.
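
    If centering were preferred, a variation of click_object might look like this (a sketch only; the bot itself sticks with offsets):

        def click_object_centered(self, template):
            # Click the middle of the matched template rather than applying a
            # hand-tuned offset from its top left corner.
            matches = self.vision.find_template(template)
            h, w = self.vision.templates[template].shape
            x = matches[1][0] + w // 2
            y = matches[0][0] + h // 2
            self.controller.move_mouse(x, y)
            self.controller.left_mouse_click()
            time.sleep(0.5)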

    Next let’s run over indicator objects, and objects the bot will need to click on:

    • Character’s name on the health bar (indicator whether the round is starting);
    • Left corner post of the ring (to launch the player). I tried using the character's head before, but there are multiple characters in rotation as the game progresses;
    • “Filled with goodies!” text (indicator for pinata screen);
    • “Cancel” button (to exit pinata screen);
    • “Tap to continue” text (to skip “Missions” screen);
    • “Next” button (to restart round);
    • Rocket bar (indicator for when rocket boost can be launched);

    These actions and indicators can be implemented with these methods:

        def round_starting(self, player):
            return self.can_see_object('%s-health-bar' % player)
    
        def round_finished(self):
            return self.can_see_object('tap-to-continue')
    
        def click_to_continue(self):
            return self.click_object('tap-to-continue', offset=(50, 30))
    
        def can_start_round(self):
            return self.can_see_object('next-button')
    
        def start_round(self):
            return self.click_object('next-button', offset=(100, 30))
    
        def has_full_rocket(self):
            return self.can_see_object('full-rocket')
    
        def use_full_rocket(self):
            return self.click_object('full-rocket')
    
        def found_pinata(self):
            return self.can_see_object('filled-with-goodies')
    
        def click_cancel(self):
            return self.click_object('cancel-button')
    

    Launching the character is more involved, as it requires dragging the character sideways and releasing. In this case I’ve used the left corner post instead of the character itself because there are two characters in rotation (Bison and Pineapple).

        def launch_player(self):
            # Try multiple sizes of goalpost due to perspective changes for
            # different opponents
            scales = [1.2, 1.1, 1.05, 1.04, 1.03, 1.02, 1.01, 1.0, 0.99, 0.98, 0.97, 0.96, 0.95]
            matches = self.vision.scaled_find_template('left-goalpost', threshold=0.75, scales=scales)
            x = matches[1][0]
            y = matches[0][0]
    
            self.controller.left_mouse_drag(
                (x, y),
                (x-200, y+10)
            )
    
            time.sleep(0.5)
    

    This is where the brute-force attempt to detect templates at different scales comes in. Previously I used the same template detection via self.vision.find_template(), but it seemingly failed at random. What I eventually noticed is that the size of the opponent affects the camera's perspective, e.g. the first green bear is small and static, while the brown bunny jumps up significantly. For bigger opponents the camera zooms out, making the character smaller than the template, while for smaller opponents the character becomes larger. So a broad range of scales is used in an attempt to cover all character and opponent combinations.

    Lastly, game logic can be written as follows:

        def run(self):
            while True:
                self.vision.refresh_frame()
                if self.state == 'not started' and self.round_starting('bison'):
                    self.log('Round needs to be started, launching bison')
                    self.launch_player()
                    self.state = 'started'
                if self.state == 'not started' and self.round_starting('pineapple'):
                    self.log('Round needs to be started, launching pineapple')
                    self.launch_player()
                    self.state = 'started'
                elif self.state == 'started' and self.found_pinata():
                    self.log('Found a pinata, attempting to skip')
                    self.click_cancel()
                elif self.state == 'started' and self.round_finished():
                    self.log('Round finished, clicking to continue')
                    self.click_to_continue()
                    self.state = 'mission_finished'
                elif self.state == 'started' and self.has_full_rocket():
                    self.log('Round in progress, has full rocket, attempting to use it')
                    self.use_full_rocket()
                elif self.state == 'mission_finished' and self.can_start_round():
                    self.log('Mission finished, trying to restart round')
                    self.start_round()
                    self.state = 'not started'
                else:
                    self.log('Not doing anything')
                time.sleep(1)
    

    Fairly straightforward. Take a new screenshot about once a second and, based on it and the internal game state, perform the specific actions we defined above. Here the self.log() method is just:

        def log(self, text):
            print('[%s] %s' % (time.strftime('%H:%M:%S'), text))
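
    For completeness, wiring everything together might look roughly like this (the module names are hypothetical - the actual entry point is in the repository):

    from vision import Vision
    from controller import Controller
    from game import Game

    if __name__ == '__main__':
        game = Game(Vision(), Controller())
        game.run()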
    

    End result

    And that's all there is to it. The basic actions and screens are handled, and in the most common cases the bot should be able to handle itself fine. What may cause problems are ad screens (e.g. sales, promotions), unlocked items, card screens (completed missions) and newly unlocked characters. But such screens are rare.

    You can find the full code for this bot on Github.

    While not terribly exciting, here is a sample video of the bot's gameplay:

    All in all this was an interesting experiment. Not optimal, but good enough.

    Template matching got us pretty far, and the technique could still be used to automate the remaining unhandled screens (e.g. unlocked items). However, template matching works well only when the perspective doesn't change, and as we've seen from the difficulties of launching the player - that is indeed an issue. Although OpenCV mentions Homography as one of the more advanced ways to deal with perspective changes, for Burrito Bison a brute-force approach was sufficient.

    Even though template matching is a basic technique, it can be pretty powerful for automating games. You may not have much luck writing the next Counter-Strike bot with it, but as you've seen so far, interacting with simple 2D objects and interfaces can be done fairly easily.

    Happy gold farming!


  • Experimenting with TLA+ and PlusCal 3: Throttling multiple senders

    Last time we covered a basic specification of a single-sender message throttling algorithm, which is nice, but in a real-world scenario you will usually have multiple clients interacting with the system - which is the idea for this post. The goal is to adapt our previous message throttling spec for multiple senders and explore it.

    The idea

    The previous spec modelled the "message transmitter" and "message sender" as one entity, specified by a single program with a loop in it:

    (* --algorithm throttling
    \* ...
    begin
        Simulate:
        while Time < MaxTime do
            either
                SendMessage:
                    ThrottledSendMessage(MessageId);
            or
                TimeStep:
                    Time := Time + 1;
            end either;
        end while;
    end algorithm; *)
    

    However, if we introduce multiple senders, it may be meaningful to control the number of senders, so we could have 1 "message transmitter" and N "message senders". PlusCal has the concept of processes, which can help model the system in this case: the transmitter and senders can be modelled as individual processes. Doing it this way also lets us explore PlusCal processes themselves.

    So the idea here is to make the "Server" process responsible only for global time keeping. I don't think there is a point in modelling inter-process message passing - or perhaps that's an idea for a future extension. "Client" processes will always attempt to send messages without waiting; TLC will take care of modelling how the messages are distributed in time.

    Spec

    Now that there are multiple clients and we throttle per client, we will need to keep track of the sender in SendMessage:

    macro SendMessage(Sender, Message)
    begin
        MessageLog := MessageLog \union {[Time |-> Time, Message |-> Message, Sender |-> Sender]};
    end macro;
    

    As mentioned above, the “Server” process can be responsible for just tracking time:

    process Server = 1
    begin
        Simulate:
        while Time < MaxTime do
            Time := Time + 1;
        end while;
    end process;
    

    So Server is defined as a PlusCal process that keeps incrementing the global Time variable. Meanwhile, the clients can be defined as processes that just keep attempting to send messages:

    process Client \in Clients
    variables MessageId = 0;
    begin
        Simulate:
        while Time < MaxTime do
            if SentMessages(self, GetTimeWindow(Time)) < Limit then
                SendMessage(self, MessageId);
                MessageId := MessageId + 1;
            end if;
        end while;
    end process;
    

    process Client \in Clients creates a process for each value in the Clients set. So if Clients = {1, 2, 3}, that will create 3 processes, one for each of the values, and self references the value assigned to the process, e.g. self = 1, self = 2 or self = 3. The macro body of ThrottledSendMessage has been inlined, as MessageId has been turned into a process-local variable.

    And lastly, the invariant needs to change to ensure that no client has sent more than the limited number of messages in any window:

    FrequencyInvariant ==
        \A C \in Clients: \A T \in 0..MaxTime: SentMessages(C, GetTimeWindow(T)) <= Limit
    

    Full PlusCal code:

    ----------------------------- MODULE Throttling -----------------------------
    EXTENDS Naturals, FiniteSets
    CONSTANTS Window, Limit, MaxTime, Clients
    
    (* --algorithm throttling
    variables
        Time = 0,
        MessageLog = {};   
    
    define
        GetTimeWindow(T) ==
            {X \in Nat : X <= T /\ X >= (T - Window)}
            
        SentMessages(Sender, TimeWindow) ==
            Cardinality({Message \in MessageLog: Message.Time \in TimeWindow /\ Message.Sender = Sender})
    end define;   
     
    macro SendMessage(Sender, Message)
    begin
        MessageLog := MessageLog \union {[Time |-> Time, Message |-> Message, Sender |-> Sender]};
    end macro;
    
    process Server = 1
    begin
        Simulate:
        while Time < MaxTime do
            Time := Time + 1;
        end while;
    end process;
    
    process Client \in Clients
    variables MessageId = 0;
    begin
        Simulate:
        while Time < MaxTime do
            if SentMessages(self, GetTimeWindow(Time)) < Limit then
                SendMessage(self, MessageId);
                MessageId := MessageId + 1;
            end if;
        end while;
    end process;
    
        
    end algorithm; *)
    
    FrequencyInvariant ==
        \A C \in Clients: \A T \in 0..MaxTime: SentMessages(C, GetTimeWindow(T)) <= Limit
    
    =============================================================================
    

    Let's try a small model first: 2 clients, a window of 2 seconds, modelled for 5 total seconds, with a limit of 3 messages per window.

    Multi-client Throttling: TLC model values

    Here client process names are defined as a set of model values. Model values are special values in TLA+ in that a value is only equal to itself and nothing else (C1 = C1, but C1 != C2, C1 != 5, C1 != FALSE, etc.), which is useful for avoiding mix-ups with primitive types. A symmetry set can also be selected, as permutations of the values in this set do not change the meaning: whether we have processes C1 and C2, or C2 and C1 - it doesn't matter. Marking the set as a symmetry set does, however, allow TLC to reduce the search space and check the spec faster.

    When run with these values, TLC doesn't report any issues:

    Multi-client Throttling: TLC success

    Looking good! But so far the spec only checks that the number of messages sent is at or below the limit. Zero messages sent is also under the limit! So if SendMessage is altered to drop all messages:

    macro SendMessage(Sender, Message)
    begin
        MessageLog := {};
    end macro;
    

    And the model is rerun - TLC will get stuck incrementing message IDs, but the invariant will still hold. Not great! This fits the idea mentioned in the Amazon paper, which suggests TLA+ forces one to think about what needs to go right. What this model is still missing is a way to ensure that if a message can be sent, it actually is sent, and is not partially or incorrectly throttled.

    But before going down that path, there is a change to consider that would simplify the model and resolve the infinite loop problem, which helps demonstrate the message dropping issue more clearly.

    Simplification

    Hillel Wayne, the author of learntla.com, recently helpfully suggested tracking the number of messages sent using functions, rather than tracking the messages themselves and counting them afterwards. This works because we don't really care about message contents. Thanks for the tip!

    To do that, Messages can be redefined as such:

    variables
        Time = 0,
        SimulationTimes = 0..MaxTime,
        Messages = [C \in Clients |-> [T \in SimulationTimes |-> 0]];
    

    Messages maps each client to another function, which holds the number of messages that client has sent at a particular Time. Addressing a specific time for a particular client is then as simple as Messages[Client][Time].
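
    In programming terms this is roughly a nested map from clients to per-time counters; a loose Python analogy (an illustration only, not part of the spec):

    # A nested dictionary: client -> (time -> number of messages sent)
    max_time = 4
    clients = ['C1', 'C2']
    messages = {c: {t: 0 for t in range(max_time + 1)} for c in clients}
    messages['C1'][3] += 1  # client C1 sends one message at time 3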

    The way time windows are calculated needs to change as well, as TLC complains that it cannot enumerate the infinite Nat set and a finite integer set has to be used instead:

    Attempted to enumerate { x \in S : p(x) } when S:
    Nat
    is not enumerable
    While working on the initial state:
    /\ Time = 0
    /\ Messages = ( C1 :> (0 :> 0 @@ 1 :> 0 @@ 2 :> 0 @@ 3 :> 0 @@ 4 :> 0) @@
      C2 :> (0 :> 0 @@ 1 :> 0 @@ 2 :> 0 @@ 3 :> 0 @@ 4 :> 0) )
    /\ pc = (C1 :> "Simulate" @@ C2 :> "Simulate" @@ 1 :> "Simulate_")
    /\ SimulationTimes = 0..4
    

    Here's a response by Leslie Lamport detailing the issue a bit more. GetTimeWindow was rewritten as such:

    GetTimeWindow(T) ==
        {X \in (T-Window)..T : X >= 0 /\ X <= T /\ X >= (T - Window)}
    

    To count the number of messages, summing with a reduce is convenient. For that I borrowed the SetSum operator from the snippet Hillel recommended, ending up with:

    TotalSentMessages(Sender, TimeWindow) ==
        SetSum({Messages[Sender][T] : T \in TimeWindow})
    

    The rest of the changes are trivial, so here’s the code:

    ----------------------------- MODULE ThrottlingSimplified -----------------------------
    EXTENDS Naturals, FiniteSets, Sequences
    CONSTANTS Window, Limit, MaxTime, Clients
    
    (* --algorithm throttling
    variables
        Time = 0,
        SimulationTimes = 0..MaxTime,
        Messages = [C \in Clients |-> [T \in SimulationTimes |-> 0]];
    
    define
        GetTimeWindow(T) ==
            {X \in (T-Window)..T : X >= 0 /\ X <= T /\ X >= (T - Window)}
        
            
        Pick(Set) == CHOOSE s \in Set : TRUE
    
        RECURSIVE SetReduce(_, _, _)
        SetReduce(Op(_, _), set, value) == 
          IF set = {} THEN value
          ELSE 
            LET s == Pick(set)
            IN SetReduce(Op, set \ {s}, Op(s, value)) 
                
        SetSum(set) == 
          LET _op(a, b) == a + b
          IN SetReduce(_op, set, 0)
            
        TotalSentMessages(Sender, TimeWindow) ==
            SetSum({Messages[Sender][T] : T \in TimeWindow})
            
    end define;   
     
    macro SendMessage(Sender)
    begin
        Messages[Sender][Time] := Messages[Sender][Time] + 1;
    end macro;
    
    process Server = 1
    begin
        Simulate:
        while Time < MaxTime do
            Time := Time + 1;
        end while;
    end process;
    
    process Client \in Clients
    begin
        Simulate:
        while Time < MaxTime do
            if TotalSentMessages(self, GetTimeWindow(Time)) < Limit then
                SendMessage(self);
            end if;
        end while;
    end process;
    
        
    end algorithm; *)
    
    FrequencyInvariant ==
        \A C \in Clients: \A T \in SimulationTimes: TotalSentMessages(C, GetTimeWindow(T)) <= Limit
    

    And if we run the model, it passes:

    Simplified Multi-client Throttling: model passes

    But if the + 1 is removed from the SendMessage macro, effectively quietly dropping all messages - the model still passes. Gah!

    Ensuring messages do get sent

    The idea for fixing this issue is simple - track message sending attempts. The invariant can then check that enough messages have actually been sent.

    To do so, a new variable MessagesAttempted is added:

    MessagesAttempted = [C \in Clients |-> [T \in SimulationTimes |-> 0]];
    

    And an operator to count the total number of attempts made during a window:

    TotalAttemptedMessages(Sender, TimeWindow) ==
        SetSum({MessagesAttempted[Sender][T] : T \in TimeWindow})
    

    Macro to mark the sending attempt:

    macro MarkSendingAttempt(Sender)
    begin
        MessagesAttempted[Sender][Time] := MessagesAttempted[Sender][Time] + 1;
    end macro;
    

    And the client process is updated to mark sending attempts:

    process Client \in Clients
    begin
        Simulate:
        while Time < MaxTime do
            if TotalSentMessages(self, GetTimeWindow(Time)) < Limit then
                SendMessage(self);
                MarkSendingAttempt(self);
            end if;
        end while;
    end process;
    

    As for the invariant, there are two relevant cases: a) when there were fewer sending attempts than the limit permits, all of them should be successfully accepted; b) when the number of attempts is larger than the limit, the number of messages successfully accepted should be whatever Limit is set to. This could be written as:

    PermittedMessagesAcceptedInvariant ==
        \A C \in Clients:
            \A T \in SimulationTimes:
                \/ TotalAttemptedMessages(C, GetTimeWindow(T)) = TotalSentMessages(C, GetTimeWindow(T))
                \/ TotalAttemptedMessages(C, GetTimeWindow(T)) = Limit
    

    Here’s the full code:

    ----------------------------- MODULE ThrottlingSimplified -----------------------------
    EXTENDS Naturals, FiniteSets, Sequences
    CONSTANTS Window, Limit, MaxTime, Clients
    
    (* --algorithm throttling
    variables
        Time = 0,
        SimulationTimes = 0..MaxTime,
        Messages = [C \in Clients |-> [T \in SimulationTimes |-> 0]],
        MessagesAttempted = [C \in Clients |-> [T \in SimulationTimes |-> 0]];
    
    define
        GetTimeWindow(T) ==
            {X \in (T-Window)..T : X >= 0 /\ X <= T /\ X >= (T - Window)}
        
            
        Pick(Set) == CHOOSE s \in Set : TRUE
    
        RECURSIVE SetReduce(_, _, _)
        SetReduce(Op(_, _), set, value) == 
          IF set = {} THEN value
          ELSE 
            LET s == Pick(set)
            IN SetReduce(Op, set \ {s}, Op(s, value)) 
                
        SetSum(set) == 
          LET _op(a, b) == a + b
          IN SetReduce(_op, set, 0)
            
        TotalSentMessages(Sender, TimeWindow) ==
            SetSum({Messages[Sender][T] : T \in TimeWindow})
            
        TotalAttemptedMessages(Sender, TimeWindow) ==
            SetSum({MessagesAttempted[Sender][T] : T \in TimeWindow})
            
    end define;   
     
    macro SendMessage(Sender)
    begin
        Messages[Sender][Time] := Messages[Sender][Time] + 1;
    end macro;
    
    macro MarkSendingAttempt(Sender)
    begin
        MessagesAttempted[Sender][Time] := MessagesAttempted[Sender][Time] + 1;
    end macro;
    
    process Server = 1
    begin
        Simulate:
        while Time < MaxTime do
            Time := Time + 1;
        end while;
    end process;
    
    process Client \in Clients
    begin
        Simulate:
        while Time < MaxTime do
            if TotalSentMessages(self, GetTimeWindow(Time)) < Limit then
                SendMessage(self);
                MarkSendingAttempt(self);
            end if;
        end while;
    end process;
    
        
    end algorithm; *)
    
    FrequencyInvariant ==
        \A C \in Clients: \A T \in SimulationTimes: TotalSentMessages(C, GetTimeWindow(T)) <= Limit
        
    PermittedMessagesAcceptedInvariant ==
        \A C \in Clients:
            \A T \in SimulationTimes:
                \/ TotalAttemptedMessages(C, GetTimeWindow(T)) = TotalSentMessages(C, GetTimeWindow(T))
                \/ TotalAttemptedMessages(C, GetTimeWindow(T)) = Limit
    

    If messages are successfully accepted, the model passes. However, if SendMessage is purposefully broken again by commenting out the incrementation, the model fails:

    Invariant PermittedMessagesAcceptedInvariant is violated.
    

    Simplified Multi-client Throttling: permitted message invariant violated

    Which is great, as this is exactly what we wanted to achieve.


  • Experimenting with TLA+ and PlusCal 2: Throttling

    Last time we briefly looked over resources for learning TLA+ and PlusCal, and wrote a basic spec to prove the equivalence of a few logic formulas. In this post I thought it would be interesting to write a spec for a more common scenario.

    The idea

    This is inspired by an issue we had with MailPoet 2. An attacker was able to repeatedly sign up users; each signup triggered a subscription confirmation email, and that way some email addresses were effectively bombarded with emails. Let's try to model and explore this situation.

    In the actual system we have the client, the server, emails being sent out, network connections… Lots of details, most of which we can probably ignore safely and still model the system sufficiently.

    Sending messages without any rate-limiting

    Let's start with the initial situation: we want to allow TLC to send arbitrary "Messages" (the actual content and metadata are irrelevant here) over time, and we want to put some limit on how many messages can be sent out over some period of time.

    For a start we'll model just a single sender, storing messages in a set as a log, noting their contents and the time when they were sent. The contents are irrelevant, but in this case they help humans interpret the situation. We'll allow TLC to keep sending an arbitrary number of messages one by one, and then increment the time. That way we allow our model to "send 60 messages in one second".

    Finally, we add an invariant to make sure that the number of messages sent in one second does not go over the limit. Here’s the code:

    ----------------------------- MODULE Throttling -----------------------------
    EXTENDS Naturals, FiniteSets
    CONSTANTS Window, Limit, MaxTime
    
    (* --algorithm throttling
    variables
        Time = 0,
        MessageId = 0,
        MessageLog = {};
    
    macro SendMessage(Message)
    begin
        MessageLog := MessageLog \union {[Time |-> Time, Message |-> Message]};
    end macro;
    
    begin
        Simulate:
        while Time < MaxTime do
            either
                SendMessage:
                    SendMessage(MessageId);
                    MessageId := MessageId + 1;
            or
                TimeStep:
                    Time := Time + 1;
            end either;
        end while;
    end algorithm; *)
    
    FrequencyInvariant ==
        \A T \in 0..MaxTime: (Cardinality({Message \in MessageLog: Message["Time"] = T}) <= Limit)
    

    An individual entry in MessageLog is a record with 2 keys: Time and Message, and MessageLog itself is a set. The FrequencyInvariant checks that during every second of our simulation the number of messages sent does not exceed our Limit.

    We’ll use these constant values for our initial runs:

    Window <- 3
    MaxTime <- 5
    Limit <- 3
    

    If we translate the PlusCal code to TLA+ (with Ctrl+T in the Toolbox) and run the model with TLC, then, as expected, it quickly finds an error:

    TLC throttling error: message limit exceeded

    Since we did not perform any throttling and instead allowed TLC to “send” an arbitrary number of messages - TLC sent 4 messages before the invariant failed.

    Time based throttling

    As the next step, let's make use of the Window constant and alter the SendMessage() macro to accept messages only if doing so does not exceed our Limit.

    First of all, given a Time, we want to grab a set of times falling within our window. So if the current Time = 5, and our Window = 2, we want it to return {3, 4, 5}:

    define
      GetTimeWindow(T) ==
          {X \in Nat : X <= T /\ X >= (T - Window)}
    \* ...
    

    Next, we want to define a way to count the number of messages sent over a set of specific times. So if our time window is {3, 4, 5, 6}, we count the total number of messages sent at Time = 3, Time = 4, …, Time = 6:

    \* ...
        SentMessages(TimeWindow) ==
                Cardinality({Message \in MessageLog: Message.Time \in TimeWindow})
    end define;
    

    Then we can write a ThrottledSendMessage variant of the SendMessage macro:

    macro ThrottledSendMessage(Message)
    begin
        if SentMessages(GetTimeWindow(Time)) < Limit then
            SendMessage(Message);
        end if;
    end macro;
    

    Here we silently drop the overflow messages, “sending” only what’s within the limit. At this point this is the code we have:

    ----------------------------- MODULE Throttling -----------------------------
    EXTENDS Naturals, FiniteSets
    CONSTANTS Window, Limit, MaxTime
    
    (* --algorithm throttling
    variables
        Time = 0,
        MessageId = 0,
        MessageLog = {};   
    
    define
        GetTimeWindow(T) ==
            {X \in Nat : X <= T /\ X >= (T - Window)}
            
        SentMessages(TimeWindow) ==
            Cardinality({Message \in MessageLog: Message.Time \in TimeWindow})
    end define;   
     
    macro SendMessage(Message)
    begin
        MessageLog := MessageLog \union {[Time |-> Time, Message |-> Message]};
    end macro;
    
    macro ThrottledSendMessage(Message)
    begin   
        if SentMessages(GetTimeWindow(Time)) < Limit then
            SendMessage(Message);
        end if;
    end macro;
         
    begin
        Simulate:
        while Time < MaxTime do
            either
                SendMessage:
                    ThrottledSendMessage(MessageId);
                    MessageId := MessageId + 1;
            or
                TimeStep:
                    Time := Time + 1;
            end either;
        end while;    
    end algorithm; *)
    
    FrequencyInvariant ==
        \A T \in 0..MaxTime: (Cardinality({Message \in MessageLog: Message["Time"] = T}) <= Limit)
    
    =============================================================================
    

    At this point we can run our model:

    TLC throttling: checking model does not finish

    And hmm… After 10-20 minutes of checking, TLC does not finish and the number of distinct states keeps going up. I even tried reducing the constants to Window <- 2, MaxTime <- 3, Limit <- 3. As TLC climbed over 2GB of memory used, I cancelled the process. Now what?

    Assuming that the process would never terminate, what could cause that? We limited our "simulation" to MaxTime <- 3, time keeps moving on, and the number of messages "sent" is capped by Limit. But looking back at the SendMessage label, ThrottledSendMessage() gets called and its result isn't considered. Regardless of whether or not a message has been sent, MessageId always keeps going up, and each such new state could mean TLC needs to keep going. So that's the hypothesis.

    Let's try capping that by moving MessageId := MessageId + 1; from the SendMessage label into the ThrottledSendMessage macro, performing it only if the message is successfully sent. Now if we rerun the model:

    TLC throttling: model checking finishes

    We can see that TLC has found no errors, it explored 142 distinct states, and the number of messages sent at any single Time value hasn't exceeded Limit <- 3. Great!

    Next let’s strengthen the FrequencyInvariant. Right now it checks whether the number of messages sent at ONE SPECIFIC VALUE OF Time does not exceed the Limit. But we want to ensure that the same holds for the whole Window of time values:

    FrequencyInvariant ==
        \A T \in 0..MaxTime: SentMessages(GetTimeWindow(T)) <= Limit
    

    And because the spec already implemented time-window based rate limiting, the number of states checked stayed the same:

    TLC throttling: model checking finishes

    At this point running TLC with smaller model sizes helped build confidence in the model. But will it hold if we crank the constants up to, say… Window <- 5, MaxTime <- 30, Limit <- 10? Unfortunately, after an hour of running the model the number of distinct states and the queue size just kept going up at a consistent pace, so after it consumed over 1GB of memory it had to be terminated.

    TLC throttling: model checking did not finish

    I think a high Limit creates quite a few states: you can spread 10 messages over 5 distinct times in 5^10 ways, and spreading that over 30-5+1=26 possible windows brings us into the hundreds of millions of possible states. The number of states quickly goes up even in this fairly simple case! It took my laptop ~10 minutes to explore the ~13 million states reached with the Window <- 5, MaxTime <- 15, Limit <- 5 configuration:

    TLC throttling: larger model checking success

    For the sake of the example that's probably sufficient. I don't see much point in checking larger models in this case, but on a faster computer, or on a distributed set of servers, it could be checked much faster. Yes - TLC does work in a distributed mode, which could be an interesting feature to play around with in itself.


  • Experimenting with formal methods: TLA+ and PlusCal

    When you hear about formal methods in the context of software development, what do you picture in your mind? I would imagine a time consuming and possibly tedious process used in building mission-critical software, e.g. for a pacemaker or a space probe.

    It may be something briefly visited during Computer Science courses at a college or university, quickly running through a few trivial examples. I had never actually had a chance to look into them before, so after reading the "loudly" titled "The Coming Software Apocalypse" in The Atlantic (an interesting article, by the way), the topic seemed intriguing.

    Furthermore, my interest was elevated by the fact that the Amazon Web Services team has had good success using formal methods in their day-to-day work and continues to push for their adoption. (Use of Formal Methods at Amazon Web Services)

    So the goal here is to learn enough about formal methods to be able to use them in at least one case on my own, whether at work or on a pet project. And since there are a few books about TLA+, that will be the focus of this experiment.

    Why?

    TLA+ is built to model algorithms and systems, to make sure they work as expected and to catch any bugs lurking in them. It can be very helpful in finding subtle bugs in non-trivial algorithms or systems we create. When you write a specification, the TLA+ Toolbox comes with the TLC model checker, which will check your specification and try to find issues in it. E.g. in one of the examples a bug TLC found took about 30 steps to reproduce. It also provides tools to explore certain properties of the system in question, e.g. what the system is allowed to do, or what the system eventually needs to do. This improves our confidence in the systems we build.

    With TLA+ we are working with a model of a system, which can work both for and against us. With a model we can iterate faster, glossing over irrelevant details and exploring our ideas before we build the final system. But because we work with a model, we have to take great care describing it, so as not to gloss over critical behavior-altering details.

    The other benefit is a better understanding of the systems we build, how they work and how they may fail. Writing a spec forces you to think about what needs to go right. To quote the AWS paper above: "we have found this rigorous "what needs to go right?" approach to be significantly less error prone than the ad hoc "what might go wrong?" approach."

    Learning about TLA+

    To start with, Leslie Lamport has produced a TLA+ Video Course. The videos very clearly explain how TLA+ works and how to set up the Toolbox and get it running, and they walk through running and writing a few specifications as well, including parts of widely used algorithms such as Paxos Commit (video version, paper).

    Once up and running, Mr. Lamport's "Specifying Systems" book provides an even more detailed look at TLA+. The book is comprehensive and can be a tough read; however, it comprises multiple parts, and the first part alone "contains all that most engineers need to know about writing specifications" (83 pages).

    And if you have checked out any of the specifications shared in the book and videos above, you may find that the way specifications are written differs somewhat from the day-to-day programming languages one may use. Luckily, the Amazon paper mentions PlusCal, which is a companion language to TLA+ and feels more like a regular C-style programming language. They also report some engineers being more comfortable and productive with PlusCal, while still needing the extra flexibility provided by TLA+. You can learn more about PlusCal in Hillel Wayne's online book Learn TLA+.

    Exploring tautologies

    A tautology is defined as "[…] a formula which is "always true" — that is, it is true for every assignment of truth values to its simple components. You can think of a tautology as a rule of logic." (ref). In formal logic there are formulas which are equivalent, e.g. A => B (A implies B) is equivalent to ~A \/ B (not A or B). So as a very basic starting point, let's prove that with TLA+.

    ---------------------------- MODULE Tautologies ----------------------------
    VARIABLES P, Q
    
    F1(A, B) == A => B
    F2(A, B) == ~A \/ B
    
    FormulasEquivalent == F1(P, Q) <=> F2(P, Q)
    
    Init ==
        /\ P \in BOOLEAN
        /\ Q \in BOOLEAN
    
    Next ==
        /\ P' \in BOOLEAN
        /\ Q' \in BOOLEAN
    
    Spec ==
        Init /\ [][Next]_<<P, Q>>
    =============================================================================
    

    The idea here is fairly simple: let TLC pick an arbitrary pair of boolean values, and make sure that whatever those values are, the results of F1 and F2 are always equivalent.

    We declare 2 variables in our spec: P and Q. In Init we state that both variables have some value from the BOOLEAN set (which is equal to {TRUE, FALSE}). We don't have to specify which particular values - TLC will pick them for us. Next is our step for mutating the values of P and Q. P' designates what the value of P will be after Next is executed. Again, we allow it to be any value from the BOOLEAN set. This is important, because if we didn't specify how exactly to mutate the value (or to leave it unchanged), TLC would be free to pick an arbitrary value for it: NULL, "somestring", the set {2, 5, 13}, etc. So up to this point we have only allowed TLC to pick the values of P and Q. They could be P = FALSE, Q = FALSE; P = FALSE, Q = TRUE; P = TRUE, Q = FALSE; or P = TRUE, Q = TRUE.

    We then configure TLC to check that our invariant FormulasEquivalent holds. An invariant is a property that has to hold as the data mutates. In our case it's "whatever the P and Q values are, the results of F1(P, Q) and F2(P, Q) should be equivalent".

    TLC model configuration for Tautologies

    Successful TLC results for Tautologies

    Once we run TLC, it reports that there were 4 distinct states (as we expected), and that there are no errors - our model worked as we expected.

    Now, if we intentionally make a mistake and declare F2 as F2(A, B) == A \/ B by replacing ~A with A (removing negation), we can rerun the model and see what TLC finds:

    Errors in Tautologies found by TLC

    We see that TLC found two states where our invariant does not hold:

    Invariant FormulasEquivalent is violated by the initial state:
    /\ P = FALSE
    /\ Q = FALSE
    
    Invariant FormulasEquivalent is violated by the initial state:
    /\ P = TRUE
    /\ Q = FALSE
    

    Which is exactly what we expected. Similarly we can check equivalence for ~(A /\ B) <=> (~A \/ ~B), P <=> (P \/ P), (P => Q) <=> (~Q => ~P), etc. Useful? Not very, but it gets us writing and checking some basic specs.
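
    As a quick sanity check outside of TLA+, the same equivalences can be brute-forced with a plain Python truth table (just an aside, not part of the spec):

    from itertools import product

    def implies(a, b):
        # Material implication: if a holds then b must hold, otherwise trivially true
        return b if a else True

    for p, q in product([False, True], repeat=2):
        # A => B  is equivalent to  ~A \/ B
        assert implies(p, q) == ((not p) or q)
        # ~(A /\ B)  is equivalent to  ~A \/ ~B
        assert (not (p and q)) == ((not p) or (not q))
    print('all equivalences hold for every assignment')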

    Closing thoughts

    After running through the tutorial videos, the first part of the "Specifying Systems" book and the "Learn TLA+" book, I thought I had a grasp of it, but the air quickly ran out once I attempted to write specifications without external guidance. Perhaps specifying a sorting algorithm or solving a magic square was too much too quickly, and more practice with simpler specifications is needed. Finding an interesting yet achievable algorithm to specify is proving to be a challenge. So this part stops here with only the first trivial spec for proving equivalent logic formulas; in the next part it may be interesting to try specifying a basic rate limiting algorithm.

    In the meantime, running through TLA+ examples on Github helped better understand the types of algorithms TLA+ can specify and the way more complex specifications are written.


  • Plotting WordPress downloads with Pandas and Matplotlib

    Recently I wanted to figure out a bit more about how popular WordPress plugins are used. Helpfully, the WordPress plugin repository provides a JSON API endpoint for fetching download statistics of any plugin in the repository, e.g.:

    https://api.wordpress.org/stats/plugin/1.0/downloads.php?slug=akismet&limit=365
    

    It is used for plotting the stats on plugin pages, and it works for my purpose as well.

    There are two things I wanted to know from these stats: 1) how many downloads happen on average on each day of the week; 2) how many downloads happen in total in a week.

    First I started by manually importing the data into LibreOffice Calc and calculating the moving weekly sums and moving 7-day averages myself (for each daily downloads cell, a sum over the last 7 cells and an average over the last 7 cells).

    Akismet downloads

    For a quick plot it worked, but it was not very convenient to update if you wanted to look at multiple plugins every so often. So this was my excuse to play around with Pandas and Matplotlib and build something a bit more convenient in Python.

    Setup

    To begin we’ll use several libraries:

    • Requests to download the JSON stats;
    • Pandas for data manipulation and to run our extra calculations;
    • Matplotlib to plot the results.

    As usual, we start by creating a virtual environment. For this I used Python 3:

    virtualenv -p python3 venv
    source venv/bin/activate
    

    And installed the dependencies in our virtual environment:

    pip install requests matplotlib pandas
    

    We start with the imports and download the stats from the WP plugin repo:

    import pandas as pd
    import json, time, requests
    import matplotlib.pyplot as plt
    from matplotlib import style
    
    url = 'https://api.wordpress.org/stats/plugin/1.0/downloads.php?slug=akismet&limit=365'
    r = requests.get(url)
    data = r.json()
    

    data now contains a dict with dates for keys and download counts for values. DataFrame expects a list-like object for its data parameter, so we will need to reshape the data first from a dict into a list of dicts. While we're at it, we'll convert the date strings into date objects:

    data = [
        { 'timestamp': pd.to_datetime(timestamp, utc=False), 'downloads': int(downloads) }
        for (timestamp, downloads) in data.items()]
    data = sorted(data, key=lambda item: item['timestamp'])
    

    At this point we have converted the initial stats dictionary:

    {'2017-03-21': '36723', '2016-08-17': '18061', '2016-06-26': '17821', '2016-09-14': '52539', ...}
    

    Into a list of dictionaries:

    [{'timestamp': Timestamp('2016-06-25 00:00:00'), 'downloads': 18493}, ...]
    

    However, if we pass the resulting list into a Pandas DataFrame, we will see that it is indexed by row number, instead of by date:

    >>> pd.DataFrame(data)
         downloads  timestamp
    0        18493 2016-06-25
    1        17821 2016-06-26
    2        29163 2016-06-27
    3        28918 2016-06-28
    4        27276 2016-06-29
    ...
    

    To fix that, we reindex the resulting DataFrame on timestamp column:

    df = pd.DataFrame(data)
    df = df.set_index('timestamp')
    

    And get:

    >>> df
                downloads
    timestamp            
    2016-06-25      18493
    2016-06-26      17821
    2016-06-27      29163
    2016-06-28      28918
    ...
    

    Now that we have our raw data, let’s calculate the 7 day average. DataFrame.rolling() works nicely for this:

    weekly_average = df.rolling(window=7).mean()
    

    The way it works is that it takes a number of sequential data points (in our case, 7 days) and performs an aggregate function over them (in our case mean()) to create a new DataFrame:

    >>> weekly_average
                   downloads
    timestamp               
    2016-06-25           NaN
    2016-06-26           NaN
    2016-06-27           NaN
    2016-06-28           NaN
    2016-06-29           NaN
    2016-06-30           NaN
    2016-07-01  24177.857143
    2016-07-02  23649.857143
    2016-07-03  23186.857143
    2016-07-04  22186.428571
    ...
    

    We can use the same approach to calculate weekly sums:

    >>> weekly_sum = df.rolling(window=7).sum()
                downloads
    timestamp            
    2016-06-25        NaN
    2016-06-26        NaN
    2016-06-27        NaN
    2016-06-28        NaN
    2016-06-29        NaN
    2016-06-30        NaN
    2016-07-01   169245.0
    2016-07-02   165549.0
    2016-07-03   162308.0
    ...
    

    Now onto plotting. We want to display all 3 series (raw downloads, weekly averages and weekly sums) on a single plot, to do that we create a subplot:

    # Setting the style is optional
    style.use('ggplot')
    fig, ax = plt.subplots()
    

    fig is the figure, and ax is the axis.

    We could call the df.plot() command and it would work, but before we do that let’s set up more presentable column names for the legend:

    df.columns = ['Daily downloads']
    weekly_average.columns = ['Weekly average']
    weekly_sum.columns = ['Weekly sum']
    
    ax.set_xlabel('Time')
    ax.set_ylabel('Downloads')
    ax.legend(loc='best')
    

    And lastly we plot the data:

    df.plot(ax=ax, label="Daily downloads", legend=True, style='r-', title="Downloads")
    weekly_average.plot(ax=ax, label="7 day average", legend=True, style='b-')
    weekly_sum.plot(ax=ax, label="7 day sum", legend=True, style='r--')
    plt.show()
    

    For line styles, r- refers to a solid red line, r-- is a dashed red line, and b- is a solid blue line. You can refer to lines_styles.py example and lines styles reference for more details on styling.

    Results

    Akismet downloads plotted with Python

    Visually it perhaps looks somewhat similar to the LibreOffice Calc variant, but with some extra effort we could generate graphs for a much larger number of plugins (a rough sketch of that follows the final code below), add extra data or metrics, or generate image files. And now we can use Matplotlib's tools to zoom in and out of different sections while looking at specific details in the plot.
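
    As an aside, the first question posed at the beginning - average downloads for each day of the week - can also be answered directly with a groupby on the datetime index rather than a rolling window; a quick sketch, reusing the df built above:

    # 0 = Monday, ..., 6 = Sunday
    weekday_average = df.groupby(df.index.dayofweek).mean()
    print(weekday_average)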

    The final code looked like this:

    import pandas as pd
    import json, time, requests
    import matplotlib.pyplot as plt
    from matplotlib import style
    
    url = 'https://api.wordpress.org/stats/plugin/1.0/downloads.php?slug=akismet&limit=365'
    r = requests.get(url)
    data = r.json()
    data = [
        { 'timestamp': pd.to_datetime(timestamp, utc=False), 'downloads': int(downloads) }
        for (timestamp, downloads) in data.items()]
    data = sorted(data, key=lambda item: item['timestamp'])
    
    df = pd.DataFrame(data)
    df = df.set_index('timestamp')
    
    weekly_average = df.rolling(window=7).mean()
    weekly_sum = df.rolling(window=7).sum()
    
    style.use('ggplot')
    fig, ax = plt.subplots()
    
    df.columns = ['Daily downloads']
    weekly_average.columns = ['Weekly average']
    weekly_sum.columns = ['Weekly sum']
    
    ax.set_xlabel('Time')
    ax.set_ylabel('Downloads')
    ax.legend(loc='best')
    
    df.plot(ax=ax, label="Daily downloads", legend=True, style='r-', title="Downloads")
    weekly_average.plot(ax=ax, label="7 day average", legend=True, style='b-')
    weekly_sum.plot(ax=ax, label="7 day sum", legend=True, style='r--')
    plt.show()
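
    A possible next step, as mentioned above, would be looping over several plugins and saving each figure to a file instead of showing it interactively; a rough sketch (the plugin slugs here are just examples):

    import pandas as pd
    import requests
    import matplotlib.pyplot as plt

    def fetch_downloads(slug, limit=365):
        url = 'https://api.wordpress.org/stats/plugin/1.0/downloads.php'
        data = requests.get(url, params={'slug': slug, 'limit': limit}).json()
        rows = [
            {'timestamp': pd.to_datetime(ts), 'downloads': int(count)}
            for (ts, count) in data.items()
        ]
        rows = sorted(rows, key=lambda row: row['timestamp'])
        return pd.DataFrame(rows).set_index('timestamp')

    for slug in ['akismet', 'jetpack']:  # example plugin slugs
        df = fetch_downloads(slug)
        fig, ax = plt.subplots()
        df.plot(ax=ax, title=slug, legend=True)
        df.rolling(window=7).sum().plot(ax=ax, legend=True, style='r--')
        fig.savefig('%s-downloads.png' % slug)
        plt.close(fig)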