Ellen Shapiro

Rebasing Onto A Squashed Commit

When you’re using GitHub’s nice “Squash and Merge” button (or the GitLab equivalent, or git merge --squash) to squash all the commits in a pull request rather than simply merging them, your commit history can get really screwy if you try to rebase off of the merged commit.

Let’s say I have a master branch, and I create branch fix off of it to fix some small bugs. I make commits m, n, and o on that branch. Then from o, I create branch feature and make commits p, q, and r, which implement a new feature that depends on the previous fix. Visually, it’d look something like this:

        | (branch feature)
        | commit r
        | commit q
        | commit p
     | (branch fix)
     | commit o
     | commit n
     | commit m

Next, I’d make a PR from branch fix into master:

          | (branch feature)
          | commit r
          | commit q
 PR       | commit p
  \      /
   \    /
    \ /
     | (branch fix)
     | commit o
     | commit n
     | commit m

Once that PR is merged, you wind up with something that looks like this:

 | (branch feature)
 | commit r
 | commit q
 | commit p
 | commit o
 | commit n
 | commit m
  \     (master)
   \   / including the squashed commits of `fix` branch: m, n, and o

If I merge branch feature into master right now (without squashing), while the file changes will be correct, suddenly the previously squashed commits of the fix branch will show up again in the master branch history. This is unfortunate, as I wanted to get rid of those.

Alternatively, if I merge branch feature with squashing, the commit messages of the squashed commits of the fix branch will be included in the commit message of the squashed commit in master. While I can remove that manually, it’s a hassle and error-prone.

If I try to do a vanilla rebase before the merge, rebase will attempt to apply the changes from every single commit - including commits m, n, and o, which have already been squashed and merged.

This is because the metadata about the individual commits that made up the squashed merge is gone. In fact, this is the only difference between a squashed merge and a normal merge: both put the merged changes on top of the destination branch, but while a normal merge does this in a special merge commit that includes metadata about the commit hashes of the branches that were merged, a squash merge omits that metadata and “pretends” the merge commit is a normal commit.

Now, if you’ve got a branch which you created off of the commits which were squashed and merged, using a plain vanilla rebase command will attempt to apply every one of those commits again, sequentially.

This gets annoying very quickly. I asked in our Slack if anyone knew a good way around this. A few people banged their heads together, and we wound up with an answer that takes the commits at the tip of branch feature, which haven’t been merged yet, and makes the history look something like this:

 | PR
 | (branch feature)
 | commit r
 | commit q
 | commit p
  (master) including squashed commits of `fix` branch

There’s a slightly obscure git rebase invocation for this. First, git checkout the branch whose commits you want to pick up and plop onto master. Then, you can pass in the magic command:

git rebase --onto master [hash for commit o] [hash for commit r]

What this does is tell Git to rebase a range of commits onto master. Note that the range actually needs to start with the commit before the first one you wish to move onto master, so that all the commits are picked up.

One thing to note is that this exact command will cause the result to end up as a detached HEAD rather than as the HEAD of the branch you have checked out. You’ll need to create an updated branch from there, as your original feature branch will still be as it was (which is slightly annoying, but VERY helpful if you mess this up). Alternatively, you can checkout feature again and reset --hard that branch to the hash of the detached HEAD.
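The whole dance can be reproduced end to end in a throwaway repo. A sketch, assuming git ≥ 2.28 for `init -b`; note it passes branch names in place of the hashes for commits o and r (equivalent here), and names feature as the last argument so git moves the branch itself instead of leaving a detached HEAD:

```shell
# Throwaway demo repo; branch and commit names follow the diagrams above.
set -e
cd "$(mktemp -d)"
git init -q -b master repo && cd repo
git config user.email demo@example.com
git config user.name demo

echo base > file && git add file && git commit -qm "base"

git checkout -qb fix
for c in m n o; do echo "$c" >> file; git commit -qam "$c"; done

git checkout -qb feature
for c in p q r; do echo "$c" >> file; git commit -qam "$c"; done

# What the "Squash and Merge" button does:
git checkout -q master
git merge -q --squash fix
git commit -qm "fix (squashed)"

# Replay only p, q, r onto master; naming `feature` last makes git
# move the branch itself, so there is no detached HEAD to clean up.
git rebase -q --onto master fix feature

git log --oneline master..feature   # only r, q, p
```

Because the squash commit has the same tree as commit o, the replayed commits apply cleanly.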

Now, you’re able to make a PR that looks like the most recent ASCII art above, and have a clean commit history while still taking advantage of branching, squashing, and merging.

Colin Dodd

Introducing SingleLiveEvent

One of the nice things about LiveData being lifecycle aware is that you don’t need to worry about Views having the latest data from the ViewModel, even after rotation! Sometimes, however, that can cause unforeseen issues.

When an Android device is rotated, the View is killed and created again. At that point it is up to the developer to ensure that data is not lost during rotation. If you use LiveData this problem goes away; when the View is recreated it connects to the same LiveData and the previously cached results are returned to the View.

Consider a simple example, where clicking a Button causes an action to happen in the ViewModel. The action can succeed or fail. The View observes a LiveData that broadcasts the success state of the action.

A failure state

In this example, after the action is triggered it fails for some reason. The ViewModel sends out the failure state, and the View which is observing changes receives this failure state. The View decides to handle this failure state by showing a Toast.

So far, so good - there doesn’t seem to be an issue. However, what happens when the device is rotated?

The LiveData rotation bug


Now every time we rotate the device, the Toast is shown! When the View is recreated, it attaches to the ViewModel, which broadcasts the cached LiveData failure state. As long as we keep rotating the device, the Toast will keep showing.

SingleLiveEvent to the rescue

In this instance, what we want instead is a LiveData whose values can only be observed once: no caching; once a value has been observed, it is gone forever. This is exactly what SingleLiveEvent does. It only broadcasts new values, never cached ones. In practice this means it is useful in situations where you don’t want the caching behaviour of LiveData, such as the example above.

It also handles the situation where there are multiple subscribers: each existing subscriber will get the event once, while subscribers that attach afterwards won’t receive previously broadcast values.

SingleLiveEvent is not part of Android Architecture Components, but it is part of the official Architecture Components sample code. Google I/O 2018 is approaching so it will be interesting to see what will happen with the Arch framework.

Philipp Gross

Turn your selfie into a LEGO® brick model

Use volumetric regression networks to convert a photo of your face into a 3D voxel model, and then apply stochastic optimization to create LEGO® build layouts.

A few weeks ago, we had the idea to make an app that allows users to scan an object with their smartphone and convert the photos into a 3D model that can be built with LEGO® bricks. In the following we describe the computer vision and machine learning technologies involved in this experiment.

3D reconstruction with volumetric regression networks

Mapping a series of 2D views of an object onto a 3D model is a classical problem in computer vision, also known as Multi-View Stereo reconstruction (MVS). Every solution makes different kinds of assumptions; the most prominent one is scene rigidity, which means that no moving or deforming objects are present within the scene of interest. Other required inputs, which are often hard to come by, include the material, the intrinsic camera geometry, the camera location and orientation, and the lighting conditions. If these are not known, the problem is ill-posed, since multiple combinations can produce exactly the same photographs. In general, the reconstruction requires complex pipelines and solving non-convex optimization problems.

With the recent advent of deep learning techniques in 3D reconstruction, a promising approach to solving problems like this is to train deep neural networks (DNN). Given a large amount of training data these algorithms have been quite successful in a variety of computer vision applications, including image classification and face detection.

Since 3D reconstruction is in general a difficult problem, we decided to restrict ourselves to an object category which has been extensively studied before, and which is fun to play with. In 2017, Aaron Jackson et al. published an impressive article 1 in which they introduced Volumetric Regression Networks (VRNs) and applied them to face reconstruction. They showed that a CNN can learn directly, in an end-to-end fashion, the mapping from image pixels to the full 3D facial structure geometry (including the non-visible facial parts) from just a single 2D facial image.

vrn network The proposed VRN is a CNN architecture based on two stacked hourglass networks, which use skip connections and residual learning. It accepts an RGB input of shape (3, 192, 192) and directly regresses a 3D volume of shape (200, 192, 192). Each rectangle is a residual module of 256 features. (© Aaron Jackson et al.).

Generously, Jackson et al. also published their code and a demo based on Torch7. Additionally, Paul Lorenz was kind enough to contribute the transfer of the pre-trained VRN model to Keras/Tensorflow with his vrn-torch-to-keras project. This makes loading the model quite simple:

import tensorflow as tf
from tensorflow.core.framework import graph_pb2

def load_model(path, sess):
    with open(path, "rb") as f:
        output_graph_def = graph_pb2.GraphDef()
        output_graph_def.ParseFromString(f.read())  # parse the frozen graph
        _ = tf.import_graph_def(output_graph_def, name="")
    x = sess.graph.get_tensor_by_name('input_1:0')
    y = sess.graph.get_tensor_by_name('activation_274/Sigmoid:0')
    return x, y

sess = tf.Session()
model = load_model('vrn-tensorflow.pb', sess)

We load an input image with Pillow and Numpy:

from PIL import Image as pil_image
import numpy as np

def load_image(f):
    img = pil_image.open(f)
    img = img.resize((192, 192), pil_image.NEAREST)
    img = np.asarray(img, dtype=np.float32)
    # The shape is (192, 192, 3), i.e. channels-last order.
    return img

You should only use square images; otherwise the scaling will distort the proportions.

Now, we have everything we need to run the reconstruction:

def reconstruct3d(model, img, sess):
    x, y = model
    # The network expects channels-first input of shape (3, 192, 192)
    img = np.transpose(img, (2, 0, 1))

    vol = sess.run(y, feed_dict={x: np.array([img])})[0]
    # vol.shape == (200, 192, 192)
    return vol

The output is just a numpy array of dimension 3, where positive values indicate occupied voxel positions (voxels are the generalization of pixels to three-dimensional space). You can use raw2obj.py to convert it into a colored mesh and write it as an OBJ file for further processing. This simple text format is understood by various 3D editing tools and libraries. We use three.js to render it with WebGL in the browser:

voxels The input image (left) and rendered output mesh (middle and right).

Obviously, the VRN can’t handle glasses, but the results are nevertheless impressive.

Brick model construction

Having a solution to the 3D reconstruction problem at hand, it remains to find a LEGO® build layout that approximates the 3D body with a limited set of pieces. This is also known as legoization or brickification.

The first step is to go back to a voxel representation. If the voxels are mapped onto 1x1 LEGO® bricks, the model in general doesn’t stand. So voxels of similar color have to be merged into bigger bricks until a stable structure consisting of one connected component is found. In general, this is a hard combinatorial optimization problem. It was twice openly posed by engineers from the LEGO® company, in 1998 and 2001 2, and different solutions have been proposed using simulated annealing 3, evolutionary algorithms 2, or graph theory 4.

In our case, we are lucky that the shape of the face mesh is just a deformed ball. So, the problem shouldn’t be that difficult to solve. First, we rasterize the face mesh with some fixed resolution in order to get voxels:

voxels Voxels for three different resolutions and counts 563, 3830, 16552 (from left to right).

Even though the basic bricks are available in many colors at the pick-a-brick LEGO® store, the color space is much smaller than the full RGB space.

voxels Selection of LEGO® colors (29): Black, Brick Yellow, Bright Blue, Bright Green, Bright Orange, Bright Purple, Bright Red, Bright Reddish Violet, Bright Yellow, Bright Yellowish Green, Cool Yellow, Dark Brown, Dark Green, Dark Orange, Dark Stone Grey, Earth Blue, Earth Green, Flame Yellowish Orange, Light Purple, Medium Azur, Medium Blue, Medium Lilac, Medium Stone Grey, Olive Green, Reddish Brown, Sand Green, Sand Yellow, Spring Yellowish Green, White.

Since we are interested in building a real-life object instead of just a virtual model, we need to convert the colors with minimal perceptual loss. For that, we map the original colors into the Lab color space and choose the nearest neighbor LEGO® color by using the Delta E 2000 color difference.
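As an illustration of the nearest-color step, here is a minimal sketch. It uses the simpler Delta E 76 difference (plain Euclidean distance in Lab) rather than the Delta E 2000 formula the final pipeline uses, and the palette below is a stand-in, not the real LEGO® color list:

```python
import math

def srgb_to_lab(rgb):
    """Convert an 8-bit sRGB triple to CIE Lab (D65 white point)."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    # linear sRGB -> XYZ (D65), normalized by the white point
    x = (0.4124 * r + 0.3576 * g + 0.1805 * b) / 0.95047
    y = (0.2126 * r + 0.7152 * g + 0.0722 * b)
    z = (0.0193 * r + 0.1192 * g + 0.9505 * b) / 1.08883
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    return (116 * f(y) - 16, 500 * (f(x) - f(y)), 200 * (f(y) - f(z)))

def nearest_brick_color(rgb, palette):
    """Pick the palette entry with the smallest Delta E 76 distance."""
    lab = srgb_to_lab(rgb)
    return min(palette, key=lambda p: math.dist(lab, srgb_to_lab(p)))
```

Swapping in Delta E 2000 only changes the distance function passed to min.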

voxels Color mapping to 29 LEGO® colors, by using the L2 metric in RGB space, or Delta E 76, Delta E 94 and Delta E 2000 color differences in Lab space (from left to right).

The resulting conversion is not optimal yet, but good enough to keep going.

As we increase the resolution, the number of voxels grows cubically, which complicates the combinatorial problem and slows down the rendering. Therefore we carve out the inner, invisible voxels and keep just a thin shell. Moreover, it suffices to drop the back of the face mesh, because the front part already contains the facial geometry.

voxels Carved voxels with shell size 3. Only the visible voxels are colored.
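The carving step can be sketched with plain numpy (hypothetical helper names, not our production code): a voxel is kept if the empty region, grown `shell` times with 6-neighbour dilation, reaches it.

```python
import numpy as np

def dilate6(mask):
    """Grow a boolean mask by one step of 6-neighbour connectivity.
    Cells beyond the grid boundary count as True (i.e. empty space)."""
    p = np.pad(mask, 1, constant_values=True)
    out = p[1:-1, 1:-1, 1:-1].copy()
    out |= p[:-2, 1:-1, 1:-1] | p[2:, 1:-1, 1:-1]
    out |= p[1:-1, :-2, 1:-1] | p[1:-1, 2:, 1:-1]
    out |= p[1:-1, 1:-1, :-2] | p[1:-1, 1:-1, 2:]
    return out

def carve(filled, shell=3):
    """Keep only voxels within `shell` steps of empty space."""
    near_surface = ~filled
    for _ in range(shell):
        near_surface = dilate6(near_surface)
    return filled & near_surface
```

For a solid 7x7x7 cube and shell=1, exactly the 218 boundary voxels survive.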

The upshot of the reduced color palette is that we can merge the 1x1 bricks into larger bricks of the same color, which increases the stability and stiffness of the model. For simplicity, we work only with the basic brick types (1x1, 1x2, 1x3, 1x4, 1x6, 1x8, 2x2, 2x3, 2x4, 2x6, 2x8). As a first naive optimization algorithm, for each layer we repeatedly merge random adjacent bricks if the merged brick is admissible and all underlying visible voxels have the same color.

Since this algorithm processes each layer independently, it doesn’t take the overall structure into account, so some bricks might end up disconnected. In order to minimize this effect, for each layer and each brick we chose to maximize the number of bricks below that it connects with, and at the same time minimize the total number of bricks. This gives rise to a cost function that can evaluate any brick layout solution.

Now, we repeat our initial algorithm and replace the solution whenever the cost goes down. This meta-algorithm is also known as random-restart hill climbing. As a final postprocessing step, we compute the connected components of the whole brick layout and remove those that are disconnected from the ground. In most cases this gives an approximate brick layout that is good enough.
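The restart loop itself is tiny. A generic sketch, with the randomized layer-merge construction abstracted behind a hypothetical random_solution callable:

```python
import random

def restart_hill_climb(random_solution, cost, restarts=20, seed=42):
    """Run the randomized construction `restarts` times and keep the
    cheapest solution seen, as in random-restart hill climbing."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(restarts):
        candidate = random_solution(rng)
        c = cost(candidate)
        if c < best_cost:  # replace the solution whenever the cost drops
            best, best_cost = candidate, c
    return best, best_cost
```

Here random_solution would run the greedy random merge once, and cost would score connectivity and total brick count.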

voxels Result after 20 iterations. It has three connected components: A tiny part on the front marked as green (left), a tiny invisible part (middle) and the main component (right).

voxels Primary connected component, rendered with knobs.


Given the fantastic VRN models, it is quite easy to create a LEGO® layout from a single selfie. While the color conversion is far from perfect, it works very well for grayscale pictures or faces that are already close to the LEGO® colors.

Next, we are going to build a real life example and see how well our layout algorithm works in practice!


  1. Jackson, Aaron S and Bulat, Adrian and Argyriou, Vasileios and Tzimiropoulos, Georgios. Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression. International Conference on Computer Vision. 2017. 

  2. Petrovic, Pavel. Solving the LEGO Brick Layout Problem Using Evolutionary Algorithms. Tech. rep., Norwegian University of Science and Technology, 2001. 

  3. Gower, Rebecca A H and Heydtmann, Agnes E and Petersen, Henrik G. LEGO: Automated Model Construction. 1998. 

  4. Testuz, Roman and Schwartzburg, Yuliy and Pauly, Mark. Automatic Generation of Constructable Brick Sculptures. 2013. 

Colin Dodd

Keeping .observe() out of the ViewModel

If you’ve been using Android Architecture components you’ve probably got a nice separation of concerns. A View, doing everything UI; a Model, keeping control of your data sources; and a ViewModel, shuffling data between the two. LiveData makes it trivial to communicate between these layers, but how do you consume a LiveData in the ViewModel when you don’t have a LifecycleOwner?

One of the big advantages of LiveData is that they are lifecycle aware. When you observe a LiveData you send in a LifecycleOwner and it ensures that the observers are only informed of changes if they are in an active state. It’s great, you can observe and not have to worry about memory leaks, crashes, or stale data.

It’s trivial when observing a ViewModel’s LiveData in the View; but what about the Model to the ViewModel? In my previous article I spoke about how you can use NetworkBoundResource in the Model to return cached database data whilst fetching fresh data from the network. NetworkBoundResource exposes its data through a LiveData which means the ViewModel needs to be able to observe it; but a ViewModel is not a LifecycleOwner nor does it have a reference to a LifecycleOwner. So how can a ViewModel observe these changes?

The wrong solution

There is a method called observeForever - it will allow you to observe without passing a LifecycleOwner but it means the LiveData is no longer lifecycle aware. To stop observing you’ll need to remember to call removeObserver.

I’ve made use of observeForever in tests, but never in production code. I would be interested in hearing under what circumstances observeForever should be used. When it comes to observing in the ViewModel, there is a better solution.

The right solution

The Transformations class is what you need to keep observe out of the ViewModel. With its map and switchMap methods you can consume LiveData from the Model, and transform that LiveData into something which the View can observe.

Say you are developing an email app and you want to show the total number of unread messages. The Model only allows you to get a list of unread emails. The ViewModel however can transform the data from the Model into the unread counter that the View needs.

fun getNumberOfUnreadMessages(): LiveData<Integer> {
   return Transformations.map(model.getUnreadMessages(), { it.size })
}

model.getUnreadMessages returns a LiveData<List<Email>>, which is mapped to the size of the list. The method returns a LiveData<Integer>, which can be consumed by the View. No need to observe in the ViewModel.

switchMap is useful if you need to send a parameter from the View to the ViewModel. Imagine the situation where the user can search through their email with a search query.

fun searchEmail(query: String): LiveData<List<Email>> {
    return model.searchEmail(query)
}

The problem with the above code is that every time the View calls searchEmail a different LiveData will be returned, so the View will need to keep detaching itself and attaching itself to the LiveData returned from this function. By using switchMap we can ensure that the same LiveData is returned and the View doesn’t need to do anything special at all.

private val userInputtedQuery = MutableLiveData<String>()

fun searchEmail(query: String) {
    userInputtedQuery.value = query
}

val searchResult = Transformations.switchMap(userInputtedQuery, { model.searchEmail(it) })

By using switchMap we can observe changes to one LiveData and trigger the calling of another function. The LiveData is transformed and returned to the View, where it can be observed, avoiding having to call observe in the ViewModel.


Make use of the Transformations class to keep observe out of your ViewModel. This keeps your LiveData lifecycle aware and gives you all the benefits that entails.

Colin Dodd

Neat Android Architecture through NetworkBoundResource

One of the joys of working at Bakken & Bæck is that you regularly get the chance to click “New Project” in Android Studio. Last time I got to do that, it was decided that we’d do things using Google’s Architecture Components. Rx was out; LiveData was in.

This isn’t going to be a piece comparing the two technologies. Instead I am going to talk about a small utility class NetworkBoundResource and how it can help you architect your apps.

In keeping with the unopinionated stance on how to implement architecture, NetworkBoundResource isn’t even a part of the Android framework, despite being mentioned in the official guide to app architecture. Instead it exists as part of the samples repo for architecture components.

So, what does it do?

Simply put, it allows you to return data from the database whilst simultaneously fetching the latest data from the network. When the network call returns, the result can be stored in the database and the new result broadcast.

To put that in more concrete terms: say you have an app that fetches the weather for the user’s current location. The first time the user opens the app there is obviously nothing in the database, so you’ll need to request the latest weather information from the network. Once you’ve fetched the weather information you can show it to the user.

It makes sense to cache that weather information to a database. The next time the user requests the weather information it can then be read from the database. This avoids a network call and works offline.

However, the weather obviously changes, so if your user is always seeing the weather that’s cached in the database, your app isn’t going to be very useful. You’ll want to add some logic that decides how long the information in the database should be considered fresh. Once the information is no longer fresh, a new network call can be started to get the latest weather information.

Network calls can fail, or be slow, so ideally you’ll also show the old weather information whilst the network call is happening. When the network call is complete the cached information can be replaced with the latest information and the UI can be refreshed.

All of this logic makes for an app that works offline, feels quick, and is kept up to date. NetworkBoundResource will give you all of this functionality and all it requires is the implementation of 4 methods:

  • loadFromDb is used to return whatever information is stored in the database.
  • shouldFetch decides whether the cached data is fresh or not. If not, a new network request will be triggered.
  • createCall creates the network request. NetworkBoundResource will take responsibility for triggering the request.
  • saveCallResult saves the network response into the database. Any mapping of the response into a model object before storing can be done here.
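To make the four methods concrete, here is a rough Kotlin sketch of a repository using them. The WeatherDao, WeatherService, and mapping helpers are hypothetical, and the Resource/ApiResponse wrappers come from the samples repo rather than the framework:

```kotlin
class WeatherRepository(
    private val weatherDao: WeatherDao,   // hypothetical Room DAO
    private val api: WeatherService       // hypothetical Retrofit service
) {
    fun loadWeather(city: String): LiveData<Resource<Weather>> =
        object : NetworkBoundResource<Weather, WeatherResponse>() {
            // 1. return whatever is cached
            override fun loadFromDb() = weatherDao.getWeather(city)

            // 2. fetch again if the cache is missing or stale
            override fun shouldFetch(data: Weather?) =
                data == null || data.isStale()

            // 3. the actual network request
            override fun createCall() = api.getWeather(city)

            // 4. persist the response; the next DB emission refreshes the UI
            override fun saveCallResult(item: WeatherResponse) {
                weatherDao.insert(item.toWeather())
            }
        }.asLiveData()
}
```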

For more concrete implementation details check the section on exposing network status in the official app architecture guide.

Under the hood

NetworkBoundResource works by making use of MediatorLiveData. In essence, MediatorLiveData can observe multiple LiveData objects and react to their changes. In this instance there are two LiveData sources: one for the database and one for the network. Both of those LiveData are wrapped into one MediatorLiveData, and it is that which is exposed by NetworkBoundResource. More information about MediatorLiveData can be found in the Arch documentation.
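Roughly, the wiring looks like this (a simplified Kotlin sketch, not the sample’s exact code; dbSource and apiResponse stand in for the real sources):

```kotlin
val result = MediatorLiveData<Resource<Weather>>()

result.addSource(dbSource) { data ->
    // re-emit cached database values, marked as still loading
    result.value = Resource.loading(data)
}

result.addSource(apiResponse) { response ->
    // once the network answers, stop listening to both sources,
    // persist the result, and re-attach to the refreshed database
    result.removeSource(apiResponse)
    result.removeSource(dbSource)
    saveCallResult(response)
}
```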

Gunnar Aastrand Grimnes

Down the debugging rabbit-hole

The story of a fun 4-hour debugging adventure with pytest and Tornado

We use Tornado as an async webserver for our Python projects, and often pytest for testing.

The two of them come together nicely in pytest-tornado, which gives you pytest-marks for async/coroutine tests and pytest-fixtures for setting up/tearing down your application.

So, we set off to write some tests for a new project. We first added a login test:


@pytest.mark.gen_test
def test_login(http, login_credentials):
    r = yield http.post('/api/login', body=login_credentials)
    return {"Cookie": str(r.headers["Set-Cookie"])}

It passes! Great!

Now, we added a test for uploading a schema; the details don’t matter, it posts some JSON. Since it has to log in first, we reuse the login function, which already returns the cookie we need:


from test.test_login import test_login

@pytest.mark.gen_test
def test_schema(http):
    headers = yield test_login(http)

    yield http.post('/api/schema', body=[...], headers=headers)

    [... actually test something ...]

It also works!

Next test: test_answers – again the details don’t matter, it logs in, makes some HTTP requests and tests some things.


from test.test_login import test_login

@pytest.mark.gen_test
def test_answers(http):
    headers = yield test_login(http)

    yield http.post('/api/answers', body=[...], headers=headers)

    [... actually test something ...]

Aaaand…. it fails with:

>       yield test_login()

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
env/lib/python3.6/site-packages/tornado/gen.py:1055: in run
    value = future.result()
env/lib/python3.6/site-packages/tornado/concurrent.py:238: in result
<string>:4: in raise_exc_info
env/lib/python3.6/site-packages/tornado/gen.py:1143: in handle_yield
    self.future = convert_yielded(yielded)
env/lib/python3.6/functools.py:803: in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

yielded = <generator object test_login at 0x102420728>

    def convert_yielded(yielded):
        """Convert a yielded object into a `.Future`.

        The default implementation accepts lists, dictionaries, and Futures.

        If the `~functools.singledispatch` library is available, this function
        may be extended to support additional types. For example::

            def _(asyncio_future):
                return tornado.platform.asyncio.to_tornado_future(asyncio_future)

        .. versionadded:: 4.1
        """
        # Lists and dicts containing YieldPoints were handled earlier.
        if yielded is None:
            return moment
        elif isinstance(yielded, (list, dict)):
            return multi(yielded)
        elif is_future(yielded):
            return yielded
        elif isawaitable(yielded):
            return _wrap_awaitable(yielded)
        else:
>           raise BadYieldError("yielded unknown object %r" % (yielded,))
E           tornado.gen.BadYieldError: yielded unknown object <generator object test_login at 0x102420728>

env/lib/python3.6/site-packages/tornado/gen.py:1283: BadYieldError


So there must be a stupid error somewhere. We check for typos; we go back and start copy-pasting code from the working test_schema to make sure we didn’t type @pytest.mark.test_gen or something. The failure remains.

After a while we reach the state where test_schema.py and test_answer.py are byte-for-byte identical, but answers fails and schema passes. We go home and rethink our lives.

The next day, we realise that when invoked on just one of those files, pytest will run TWO tests: it finds test_login through the import, as well as the test in the file we invoked it on. And the order will be different, since pytest orders the tests alphabetically; in the case of test_answers it will first run that test, then test_login, but for test_schema the login test will run first.


Renaming test_answers to test_manswers (which sorts after login) confirms it: it then works.

But why does the order matter? Digging a bit deeper, we see that the value returned from test_login is in both cases of type generator, yet Tornado is happy with one of them but not the other. In the convert_yielded function (which, among other things, lets Tornado also work with await/async generators), Tornado uses inspect.isawaitable to check whether the passed generator can actually be a future. This is False when the test fails.

This is the code for isawaitable:

def isawaitable(object):
    """Return true if object can be passed to an ``await`` expression."""
    return (isinstance(object, types.CoroutineType) or
            isinstance(object, types.GeneratorType) and
                bool(object.gi_code.co_flags & CO_ITERABLE_COROUTINE) or
            isinstance(object, collections.abc.Awaitable))

It’s the co_flags line that causes our problem: in the working case, the flag for being an iterable coroutine is set. co_flags is pretty deep in the Python internals, containing a number of flags for the interpreter (the inspect docs have the full list). Our CO_ITERABLE_COROUTINE flag was added in PEP 492, which says that:

The [types.coroutine()] function applies CO_ITERABLE_COROUTINE flag to generator- function’s code object, making it return a coroutine object.

And here the rabbit hole ends! We can inspect types.coroutine:

        # Check if 'func' is a generator function.
        # (0x20 == CO_GENERATOR)
        if co_flags & 0x20:
            if func.__name__ == 'test_b':
                import ipdb; ipdb.set_trace()
            # TODO: Implement this in C.
            co = func.__code__
            func.__code__ = CodeType(
                co.co_argcount, co.co_kwonlyargcount, co.co_nlocals,
                co.co_stacksize, co.co_flags | 0x100,  # 0x100 == CO_ITERABLE_COROUTINE
                co.co_code, co.co_consts, co.co_names, co.co_varnames,
                co.co_filename, co.co_name, co.co_firstlineno, co.co_lnotab,
                co.co_freevars, co.co_cellvars)
            return func

And there the function’s __code__ object is modified in place, setting the flag! Setting a breakpoint there lets us see that pytest-tornado calls tornado.gen.coroutine on our function, which in turn calls types.coroutine:

    # On Python 3.5, set the coroutine flag on our generator, to allow it
    # to be used with 'await'.
    wrapped = func
    if hasattr(types, 'coroutine'):
        func = types.coroutine(func)

And this is how the test_login function only works if it has first been called as a test by pytest.
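The whole mechanism can be reproduced in a few lines, independent of pytest and Tornado:

```python
import inspect
import types

def gen():
    yield 1

# A plain generator is not awaitable, which is why Tornado raises
# BadYieldError when it is yielded.
assert not inspect.isawaitable(gen())

# types.coroutine() sets CO_ITERABLE_COROUTINE (0x100) on the function's
# code object, in place: just like tornado.gen.coroutine does via pytest.
types.coroutine(gen)
assert gen.__code__.co_flags & 0x100
assert inspect.isawaitable(gen())
```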


In the end, that’s the explanation, but there is no real solution - we cannot rely on the tests running in alphabetical order, so we move the reusable code out into its own function:

@pytest.mark.gen_test
def test_login(http, login_credentials):
    return ( yield do_login(http, login_credentials) )

@gen.coroutine  # tornado.gen.coroutine, so it can be yielded from other tests
def do_login(http, login_credentials):
    r = yield http.post('/api/login', body=login_credentials)
    return {"Cookie": str(r.headers["Set-Cookie"])}

Then we only import the do_login function elsewhere. You were probably not meant to reuse actual test functions like this anyway.

In fact, in Python 2 there is no isawaitable and both tests would fail; only in Python 3 do you get this weird only-if-in-the-right-alphabetical-order bug.

That’s it! 4 hours of weirdness later. Note that normally Pytest + tornado are actually pretty good friends! Next time we’ll write a blogpost about how well it works!

Ezekiel Aquino


Most of the time, SVGs come with a lot of cruft

This might be caused by a variety of reasons: messy or unstructured layers and groups, stray raster images, or just the exporter embedding its own fingerprint. For cases like these you might be surprised by how much you can shave off an SVG, and how much cleaner the markup can be, especially if you have to work with it. Who doesn’t want something cleaner and lighter?

When using SVG assets, I suggest passing them through SVGOMG, Jake Archibald’s nice online tool built on SVGO for optimising your SVGs. You can also add SVGO to your build setup or run it via Sketch plugins.

There’s a bunch of settings you can fiddle around with to get the best results for what you’re trying to achieve; you don’t want to “collapse useless groups” if you’ve structured the groups for animation purposes, for example.

Colin Dodd

True, False, ¯\_(ツ)_/¯

If this is true, do this. If this is false, do that. But what about null?

If the user is logged in, show the “My Profile” button. If the user isn’t logged in, show the “Log In” button. It’s such a common thing to do that I’ve probably done it thousands, if not tens of thousands of times. It’s so common there are even jokes about it.

One of the major advantages of Kotlin is that it distinguishes between nullable objects and non-nullable objects. This includes the Boolean type:

private fun example() {
    val isLoggedIn = tryLogin()
    if (isLoggedIn) { /**/ } // does not compile: isLoggedIn is a Boolean?
}

// returns a nullable Boolean
private fun tryLogin(): Boolean? { /**/ }

Except this doesn’t work, because isLoggedIn can now be in three states: true, false, and null, whereas the if statement only works with non-nullable booleans. We can rectify this by explicitly checking the state:

if (isLoggedIn == true)

This works, and is readable, but when code reviewing it can be hard to distinguish this from a beginner’s mistake, causing it to be incorrectly flagged as an error. That said, this seems to be the recommended approach in the Kotlin style guide.

An alternative that may be preferable is to use when:

when (isLoggedIn) {
    true -> { }
    false -> { }
    null -> { }
}

This works well if you want to branch based on the state of the boolean, but is perhaps a bit too verbose if you only want to do something in one of the boolean’s states.

Colin Dodd

Extending the View

Now you see me, now you don’t. Improving Android visibility with Kotlin extension functions

Extension functions are one of the most powerful features of Kotlin. They allow for classes to be extended with your own logic no matter how locked down the class is.

This gave me the opportunity to fix one of my personal pet peeves in Android.

Views have three states of visibility: VISIBLE, the view can be seen; INVISIBLE, the view cannot be seen, but it takes up the same amount of space as it would have done were it visible; and GONE, the view cannot be seen and takes up no space.

So many times I have ended up in a situation where I want to set the visibility of a view based on the state of a boolean.

if (isLoggedIn) {
    button.visibility = View.VISIBLE
} else {
    button.visibility = View.GONE
}
Since views have three states, there is no easy way to simply map a boolean to a visibility state. There is no setVisibility(true) – there has always had to be some wrapping logic. Well, now that extension functions exist, I can fix that once and for all:

private fun View.isVisible(shouldShow: Boolean) {
    visibility = if (shouldShow) View.VISIBLE else View.GONE
}

private fun example() {
    button.isVisible(isLoggedIn)
}
Finally! Now I can just call isVisible on any view with a boolean and the visibility of the view will be set to either VISIBLE or GONE. Of course, if you actually want views to become INVISIBLE, then you’ll need a second extension function. Or you could give your extension function a default argument:

private fun View.isVisible(bool: Boolean?, nonVisibleState: Int = View.GONE) {
    visibility = if (bool == true) View.VISIBLE else nonVisibleState
}

private fun example() {
    // this defaults to View.GONE
    button.isVisible(isLoggedIn)

    // this uses View.INVISIBLE
    button2.isVisible(isLoggedIn, View.INVISIBLE)
}