27 Feb 2020

User demo walkthrough


What should be defined to make a user demo walkthrough successful?

You need to define what you want to learn from the demo walkthrough: where does the user ask questions? Where do they get stuck? What is easy or hard for them to do? What do they think about as they go through the demo? What is and isn't working? What frustrates them? Where would they like more guidance?

The user doing the walkthrough should be as close as possible to your ideal user; otherwise you may get feedback that is biased by their own experience. A user with too much knowledge compared to your target user will be able to do many things your target user would need help with, and they may take a lot of things for granted because they already know them. On the other hand, a user with too little knowledge will require help in many places where the target user is expected to be knowledgeable, which may make the demo walkthrough slower than desired.

The walkthrough should have a clear scenario. You may give the user only an initial setup and a desired goal, and let them figure everything out by themselves. You may also go with a more directed approach, where you tell them what to do and observe whether the instructions are clear enough to accomplish each step. The first approach is interesting because it lets you observe variability in how users solve the problem.

27 Feb 2020

Identifying Python files with no coverage


I use pytest with coverage and I want to see the files that have no coverage.

It appears that pytest and pytest-cov will not list some of the files that are under namespace packages, while they work fine for files in regular packages (see PEP 420 on the topic of implicit namespace packages).

To fix this problem, one solution is to add an __init__.py file to each of your package directories so that they become regular packages.
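
If you have many directories, a small script can create the missing files. The following is a minimal sketch that assumes your packages live under a src directory (adjust the path to your layout):

    from pathlib import Path

    # Create an empty __init__.py in every directory under src/ that does not
    # already have one, turning implicit namespace packages into regular packages.
    for directory in Path("src").rglob("*"):
        if directory.is_dir() and directory.name != "__pycache__":
            init_file = directory / "__init__.py"
            if not init_file.exists():
                init_file.touch()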

If you are using PyCharm Professional, you can simply run your tests with coverage. This will let you identify all the files that currently have no coverage, as they will appear with a coverage of 0%.

26 Feb 2020

Accelerate slow pytests


My pytests take a while to complete, how can I speed up the process?

A fairly cheap solution is to use parallelization to run your tests on multiple CPUs instead of the single CPU used by default. To do so, install pytest-xdist. Once the plugin is installed, all you need to do is add -n auto when you call pytest.

Another approach, which requires more effort, is to investigate which of your tests consume a lot of CPU time. To do so, use the --durations=0 flag when you call pytest. After your tests have run, a report lists how long setting up, running, and tearing down each test took, ordered from longest to shortest, so the tests with the most potential for optimization are at the top. Focus on these tests: even with an infinite number of CPU cores, the longest test determines how long the whole suite takes.

Investigate why certain tests take a while to execute:

  • Are some tests computing something that takes a while and is computed exactly the same way by multiple tests? Precompute this result once and share it between the different tests (think of it as a fixture; see the sketch after this list).
  • Do some tests call a slow external API? If you are not testing the behavior of the remote API itself, store example responses and emulate receiving them.
  • Is there a loop in the test that runs hundreds of thousands of iterations when the same thing could be tested with only a thousand?
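
Here is a rough sketch of the first two ideas; the names expensive_computation, shared_result, extract_price, and fetch are hypothetical stand-ins. A session-scoped fixture computes a shared result once, and the code under test takes the fetch function as a parameter so a test can substitute a canned response:

    import pytest

    def expensive_computation():
        # Stand-in for a result that several tests need in exactly the same form.
        return sum(i * i for i in range(10_000_000))

    @pytest.fixture(scope="session")
    def shared_result():
        # Computed once for the whole test session and reused by every test
        # that requests it, instead of once per test.
        return expensive_computation()

    def test_result_is_positive(shared_result):
        assert shared_result > 0

    def test_result_is_an_integer(shared_result):
        assert isinstance(shared_result, int)

    def extract_price(fetch):
        # Code under test: it receives the fetch function as a parameter, so a
        # test can pass a fake instead of calling the slow remote API.
        return fetch("ACME")["price"]

    def test_extract_price_with_canned_response():
        def fake_fetch(symbol):
            return {"symbol": symbol, "price": 42}
        assert extract_price(fake_fetch) == 42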

26 Feb 2020

Working on the wrong task


How can you tell when you're working on the wrong task?

You may be working on the wrong task because priorities have changed. To determine if that is the case, ask yourself whether completing the task provides value, either to you or to your users. If the answer is no, drop the task. If the answer is yes, determine whether it is the most important task at the moment. If it isn't, figure out which task is. If it is, proceed.

You may be working on the wrong task because you don't have the necessary information to complete it in an appropriate amount of time. If you find yourself spending most of your time gathering information instead of accomplishing the task itself, then it may not be the right time to do the task yet. You may have to create a prior task, which is to acquire the knowledge necessary to execute the original one.

If you notice that your task has prerequisites that should have been completed first, then you should work on those instead of the task that depends on them. In some cases you may realize that you can't accomplish the task at all because the necessary tooling or technology is not available yet.

As I suggest in my article Given that you define a ROI on a task, when should you stop working on a task and abandon it given its cost?, you should estimate how long you expect a task to take. At the halfway point, evaluate whether you will be able to complete the task by the estimated deadline. If you can't, then you should either drop the task (if you can), or look for alternative ways to get it done, either by asking a more experienced person or by simplifying the task.

25 Feb 2020

Introducing mypy in code with lots of issues


I want to include mypy as part of my CI pipeline but my existing code contains a lot (> 100, but < 500) of issues. How can I get started?

Create a minimalist configuration of mypy such that it lists the issues that need to be fixed and returns a non-zero exit code. Based on the problem definition, we assume that at this step you have more than 100 reported issues and that fixing them would take many hours you'd rather invest in improving the code than in fixing typing issues.

Add a step in your CI pipeline that runs mypy and lists all those issues. Verify that it indeed breaks the build.

Once you've satisfied yourself that CI fails, "fix" the mypy issues by adding a # type: ignore comment at the end of each offending line. This will have the effect of resolving all the currently reported mypy issues, so that mypy now returns a zero exit code. With this in place, any future code that fails the mypy check will break the build, which allows you to use mypy from this point forward to check your types.

I suggest adding an additional comment such as # FIXME: TICKET-ID, where TICKET-ID refers to the ID of a ticket in your issue tracking system that tracks this piece of technical debt.
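
As a minimal, hypothetical example of what such a silenced line might look like (user_id and TICKET-ID are placeholders):

    # mypy would normally report an incompatible assignment here; the comment
    # silences it for now and the FIXME points at the ticket tracking the debt.
    user_id: int = "42"  # type: ignore  # FIXME: TICKET-ID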

Always prefer to fix the issues instead of ignoring them. However, also consider whether fixing them right away is an appropriate use of your time when introducing mypy (which, in my opinion, should happen as soon as possible).