
blenderaddons-ng

A Visual Studio Code project to develop, test, and profile Blender add-ons.

background

I would like to focus a bit more on automated testing of Blender add-ons. I am not talking about interactively testing an add-on to see if it does what it should do (that has its place), but about creating and automatically executing unit tests for the core functionality of add-ons.

This requires a clean setup and some thinking, and this repository should reflect this setup.

The name of the repo reflects that this is (sort of) intended as the next generation of the add-ons in the blenderaddons repo, although I do not intend to port everything over.

goals

folder structure

additional information

I have written some blog articles that provide some background information and show this repo in action:

installation

git clone https://github.com/varkenvarken/blenderaddons-ng.git
cd ./blenderaddons-ng

Just open the folder in VS Code. It will probably prompt you to reopen the folder in the dev container, but you can also explicitly call Rebuild Container from the command palette, and then start developing inside it.

requirements

The Dockerfile creates an Ubuntu 24.04 image with all binary dependencies for Blender installed. It also downloads and compiles the exact Python version that the current version of Blender comes bundled with (this may take some time because compiling is CPU-intensive).

It does not install Blender, but installs the bpy module from PyPI. This allows us to run the add-ons headless. It also installs/upgrades the packages mentioned in requirements.txt, or, as is the case with numpy, downgrades it to the version bundled with Blender. Other notable packages it installs are:

workflow

  1. Copy add_ons/example_simple.py to add_ons/<new_name>.py

    It provides a good starting point for simple add-ons.

  2. Change the code to your liking

    Don't forget to update the bl_info dictionary and to define the OPERATOR_NAME variable (which is used in the test script).

  3. Copy tests/test_example_simple.py to tests/test_<new_name>.py

    Make sure to create good tests that ensure good coverage as well; a minimal sketch of such a test follows below.
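The sketch below is hypothetical: the real tests/test_example_simple.py may be organized differently, and the import path and the amount parameter are assumptions on my part.

import bpy
import pytest

from add_ons import example_simple  # adjust the import to match your module layout


@pytest.fixture
def registered_addon():
    # register the operator before the test and clean up afterwards
    example_simple.register()
    yield
    example_simple.unregister()


def test_operator_moves_active_object(registered_addon):
    obj = bpy.context.active_object  # the default scene provides an active object
    start_x = obj.location.x

    # resolve the operator from the OPERATOR_NAME defined in the add-on,
    # e.g. "object.example_simple" -> bpy.ops.object.example_simple
    category, _, name = example_simple.OPERATOR_NAME.partition(".")
    operator = getattr(getattr(bpy.ops, category), name)
    operator(amount=1.0)  # `amount` is assumed; use your operator's properties

    assert obj.location.x == pytest.approx(start_x + 1.0)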

packages

If your add-on is not a single Python file but a package, i.e. a folder containing an __init__.py file and perhaps other files and even subfolders, you will have to think about a few things:

  1. packaging, i.e. creating a .zip file so you can install the thing in Blender.

    This is made simple by the Create packages task. Simply select Terminal -> Run task ... -> Create packages and every subfolder inside the add_ons subfolder will be zipped into its own .zip file inside the packages folder (a rough sketch of what this amounts to follows after this list).

  2. bundling external PyPI packages into your add-on.

    An example would be the vertexcolors add-on. It uses the blempy package, and the easiest way to do this is:

    cd add_ons/vertexcolors
    python3 -m pip install -t . blempy

    This will install the package into the local directory instead of into site-packages. It is possible to have your add-on use the pip module and install it at runtime into your Blender Python distro, but quite frankly that is a pain, and this way is far easier. Yes, it will mean your add-on will be a little bit bloated, but a couple of kilobytes never hurt nobody.

  3. testing

    which is just as uncomplicated as testing a single-file add-on. Just check this test for an example.
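The packaging step from point 1 essentially boils down to zipping each add-on folder. A rough Python equivalent of what the Create packages task does (the actual task is defined in the VS Code tasks configuration, so treat this only as an illustration):

# Illustration only: zip every subfolder of add_ons/ into its own archive
# inside packages/, so that each zip contains the add-on folder itself.
from pathlib import Path
import shutil

add_ons = Path("add_ons")
packages = Path("packages")
packages.mkdir(exist_ok=True)

for folder in add_ons.iterdir():
    if folder.is_dir():
        # make_archive appends the .zip extension to the base name itself
        shutil.make_archive(str(packages / folder.name), "zip",
                            root_dir=str(add_ons), base_dir=folder.name)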

profiling

The file example_simple.py contains code showing how to profile (parts of) an operator using the line-profiler package .

No profiling is done if the package isn't available or if the LINE_PROFILE environment variable is not set to "1". To create a profile, simply run:

LINE_PROFILE=1 python3 add_ons/example_simple.py

It will produce output like:

Timer unit: 1e-09 s

Total time: 8.615e-06 s
File: /workspaces/blenderaddons-ng/add_ons/example_simple.py
Function: do_execute at line 44

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
    44                                           @profile  # type: ignore (if line_profiler is available)
    45                                           def do_execute(self, context: Context) -> None:
    46                                               """Expensive part is moved out of the execute method to allow profiling.
    47
    48                                               Note that no profiling is done if line_profiler is not available or
    49                                               if the environment variable `LINE_PROFILE` is not set to "1".
    50                                               """
    51         1       1031.0   1031.0     12.0      obj: Object | None = context.active_object
    52         1       7584.0   7584.0     88.0      obj.location.x += self.amount  # type: ignore (because of the poll() method that ensures obj is not None)

Note: you cannot profile the execute() method directly, so you would typically factor out expensive code and profile just that. If you don't, i.e. if you apply the @profile decorator directly to the execute() method, the register_class() function will complain:

ValueError: expected Operator, MESH_OT_select_colinear_edges class "execute" function to have 2 args, found 0
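The guard that makes @profile optional can be as simple as the following sketch (an assumption on my part; the actual code in example_simple.py may be arranged differently):

# Sketch of an optional @profile decorator: profiling only happens when
# line_profiler is installed and LINE_PROFILE is set to "1"; otherwise
# @profile is a harmless no-op.
import os

try:
    from line_profiler import profile as _line_profile
except ImportError:
    _line_profile = None

if _line_profile is not None and os.environ.get("LINE_PROFILE") == "1":
    profile = _line_profile
else:
    def profile(func):
        return func  # no-op: the decorated function runs unprofiled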

benchmarking

Profiling is not the same as benchmarking, of course, so support for the pytest-benchmark package was added.

The file test_example_simple.py provides an example benchmark, and all benchmarks are stored in the .benchmarks directory.
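A benchmark is just a regular pytest test that uses the benchmark fixture provided by pytest-benchmark. A minimal, hypothetical sketch (the actual benchmark in test_example_simple.py may differ):

# Hypothetical benchmark sketch: the `benchmark` fixture calls the function
# repeatedly and records timing statistics.
import bpy


def test_translate_benchmark(benchmark):
    obj = bpy.context.active_object  # assumes the default scene's active object

    def translate():
        obj.location.x += 0.001

    benchmark(translate)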

I have opted to put the .benchmarks directory in .gitignore because you wouldn't usually need to save the results.

Benchmarks are excluded from the normal runs and are also not part of the automated workflow because they sometimes cause VS Code to hang. So, a VS Code task Run benchmarks is configured to run all benchmarks.

Comparing two runs is done on the command line:

pytest-benchmark compare 0001 0002

what about AI?

Brainrot warning

I have mixed feelings about AI. On the one hand, it can save time if you quickly want to cobble up some code, but on the other hand, if you don't have experience writing the code yourself, it is difficult to create a good prompt or review the generated code. Also, LLMs are quick to create a unit test for a function, but in test-driven development, that is the wrong way around! And unit tests are not just about code that works; they should also test functionality, exceptions, and edge cases. But since an LLM cannot guess the functional requirements (or 'intentions'), it tends to generate poor tests.

However, they can be used quite effectively to write a quick starter that you can then expand on. After all, it's often easier when you don't have to start from a blank slate. This is true for unit tests as well as new functions: asking for a function that does something specific is often quicker than looking it up and implementing the code from scratch. I documented my attempt at this in this blog article.

Another thing LLMs are good at is summarizing: you can quickly create docstrings or even a webpage. The webpage for this repo was originally created with the help of GitHub Copilot, based on the readme. I still had to check the text, but it came formatted with HTML and CSS, saving a huge amount of time. But in the end I reworked the whole thing, included the logo, and basically had to rewrite everything in such a way that I can easily convert the markdown to HTML.

So, use AI or not? Yes, but sparingly, and verify the results! And please have a good look at the coding style: the LLM has probably seen tons of example code that illustrates a specific bit of functionality but wasn't particularly focused on code quality. And yes, that is a friendly way of saying it often produces beginner code that will be hard to maintain. It also tends to be liberal with comments that describe what the code is doing (which is already apparent from the code itself) instead of describing its intent or providing references (which can be useful when you encounter logic bugs). So yeah, call its code unpythonic if you like 😃