A Visual Studio Code project to develop, test, and profile Blender add-ons.
I would like to focus a bit more on automated testing of Blender add-ons. I am not talking about interactively testing an add-on to see if it does what it should do (that has its place), but about creating and automatically executing unit tests for core functionality in add-ons.
This requires a clean setup and some thinking, and this repository reflects that setup.
The name of the repo reflects that this is (sort of) intended as the next generation of the add-ons in the blenderaddons repo, although I do not intend to port everything over.
It offers a folder structure that allows us to maintain multiple add-ons in the same repo, and a DevContainer setup so that we can easily change Blender and Python versions.
Note that we do not install the Blender application; the goal is to use the stand-alone bpy module to automate testing and profiling. Also, although DevContainer sounds VS Code-specific, it isn't, as we actually work based on a Dockerfile that can be used to build and deploy independently of VS Code.
Testing is done with pytest and pytest-cov, in a way we can automate with GitHub Actions as well. The configuration that is currently implemented works both with VS Code test discovery and in the automated test workflow in GitHub Actions.
Profiling is done with the line_profiler package; see below for details.
Benchmarking is done with the pytest-benchmark package; see below for details.
All add-on code lives in /add_ons. If the add-on is a package (i.e., it contains an __init__.py file), it should be in its own subfolder. (A .zip file will be created in the top-level directory /packages for every subfolder when the Create packages task is executed.)
Tests are located in the /tests folder. They should have a name starting with test_ and should contain pytest-discoverable tests.
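To give an idea, a test for a hypothetical add-on could look like the sketch below (the import path, operator idname, and fixture are assumptions for illustration, not the repo's actual code):

```python
# tests/test_example.py -- a hypothetical sketch; module and operator names
# are placeholders, not actual code from this repo.
import bpy
import pytest

from add_ons import example_simple  # assumed import path


@pytest.fixture
def clean_scene():
    # start every test from an empty factory scene, with the add-on registered
    bpy.ops.wm.read_factory_settings(use_empty=True)
    example_simple.register()
    yield
    example_simple.unregister()


def test_operator_finishes(clean_scene):
    bpy.ops.mesh.primitive_cube_add()
    result = bpy.ops.object.simple_operator()  # assumed operator idname
    assert result == {"FINISHED"}
```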
I have written some blog articles that provide background information and show this repo in action.
git clone https://github.com/varkenvarken/blenderaddons-ng.git
cd ./blenderaddons-ng
Just open the folder in VS Code. It will probably prompt you to reopen it in the container, but you can also explicitly run Rebuild Container from the command palette and then start developing inside it.
The Dockerfile creates an Ubuntu 24.04 image with all binary dependencies for Blender installed. It also downloads and compiles the exact Python version that the current version of Blender comes bundled with (this may take some time because compiling is CPU-intensive).
It does not install Blender, but installs the bpy module from PyPI. This allows us to run the add-ons headless (see the smoke-test sketch after the package list below). It also installs/upgrades the packages mentioned in requirements.txt, or, as is the case with numpy, downgrades it to the version bundled with Blender. Other notable packages it installs are:
fake-bpy-module, so that we get nice type checking in VS Code. But see the pyproject.toml file, because we needed to disable the "invalid type form" warning: the pylance/pyright people insist that the Blender implementation of properties is wrong, which it isn't; they seem to forget that annotations are for more than type hinting. See here for more info.
If you are annoyed by this, you can disable type checking altogether by adding typeCheckingMode = "off" to your pyproject.toml file.
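To illustrate the point, here is a minimal sketch (not code from this repo): Blender consumes the annotation below at registration time to create the operator property, so the annotation is doing more than type hinting, which is exactly what pyright objects to.

```python
import bpy
from bpy.props import FloatProperty


class OBJECT_OT_example(bpy.types.Operator):
    """A minimal operator sketch illustrating Blender's property annotations."""

    bl_idname = "object.example"
    bl_label = "Example"

    # Blender reads this annotation when the class is registered and turns it
    # into an operator property; it is not a type hint, which is why pyright
    # flags it as an "invalid type form".
    amount: FloatProperty(name="Amount", default=1.0)
```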
pytest-cov for creating test coverage reports; pytest itself is installed by VS Code when creating the container
pytest-benchmark for creating benchmarks (see: benchmarking)
line_profiler to provide line-by-line profiling (see: profiling)
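Since Blender itself is absent, everything runs through the standalone bpy module; a quick smoke test like the sketch below (assuming default standalone behavior) confirms the container works.

```python
# Quick headless smoke test: no Blender application required, just the
# standalone bpy module installed from PyPI.
import bpy

print(bpy.app.version_string)          # the bundled Blender version
bpy.ops.mesh.primitive_cube_add()      # operators work headless too
print(bpy.context.active_object.name)  # -> "Cube"
```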
The example add-on example_simple.py provides a good starting point for simple add-ons.
Don't forget to update the bl_info dictionary and to define the OPERATOR_NAME variable (that is used in the test script).
And make sure to create good tests that ensure good coverage as well.
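For orientation, a single-file add-on would look roughly like the skeleton below (hypothetical names and version numbers; see example_simple.py for the repo's actual conventions):

```python
# Hypothetical single-file add-on skeleton; the real example in this repo
# is add_ons/example_simple.py.
import bpy

bl_info = {
    "name": "My Add-on",
    "author": "you",
    "version": (0, 0, 1),
    "blender": (4, 2, 0),  # placeholder minimum Blender version
    "category": "Object",
}

# referenced by the test script (per the description above)
OPERATOR_NAME = "object.my_operator"


class OBJECT_OT_my_operator(bpy.types.Operator):
    bl_idname = OPERATOR_NAME
    bl_label = "My Operator"

    def execute(self, context):
        return {"FINISHED"}


def register():
    bpy.utils.register_class(OBJECT_OT_my_operator)


def unregister():
    bpy.utils.unregister_class(OBJECT_OT_my_operator)
```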
If your add-on is not a single Python file but a package, i.e., a folder containing an __init__.py file and perhaps other files and even subfolders, you will have to think of a few things:
packaging, i.e., creating a .zip file so you can install the thing in Blender.
this is made simple by the Create packages task. Simply select Terminal -> Run Task... -> Create packages, and every subfolder inside the add_ons subfolder will be zipped into its own .zip file inside the packages folder.
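The task does roughly the equivalent of this sketch (an approximation for illustration, not the actual task implementation):

```python
# Approximate equivalent of the "Create packages" task: zip every
# subfolder of add_ons into its own archive under packages/.
from pathlib import Path
import zipfile

src, dst = Path("add_ons"), Path("packages")
dst.mkdir(exist_ok=True)
for folder in (p for p in src.iterdir() if p.is_dir()):
    with zipfile.ZipFile(dst / f"{folder.name}.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        for file in folder.rglob("*"):
            if file.is_file():
                # keep the add-on folder itself as the top level in the archive
                zf.write(file, file.relative_to(src))
```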
bundling external PyPI packages into your add-on.
An example would be the vertexcolors add-on. It uses the blempy package, and the easiest way to bundle it is:
cd add_ons/vertexcolors
python3 -m pip install -t . blempy
This will install the package into the local directory instead of into site-packages. It is possible to have your add-on use the pip module and install it at runtime into your Blender Python distro, but quite frankly that is a pain, and this way is far easier. Yes, it will mean your add-on will be a little bit bloated, but a couple of kilobytes never hurt nobody.
testing, which is just as uncomplicated as testing a single-file add-on. Just check this test for an example.
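Such a test might look like the sketch below (hypothetical; it relies only on the fact that every add-on package exposes register()/unregister() from its __init__.py):

```python
# Hypothetical sketch of a test for a package add-on; the package name is
# the example mentioned above, the import path setup is an assumption.
import sys

sys.path.insert(0, "add_ons")  # make the add-on packages importable

import vertexcolors  # a package add-on: a folder with an __init__.py


def test_register_unregister():
    # every add-on must define these; registering should not raise
    vertexcolors.register()
    vertexcolors.unregister()
```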
The file example_simple.py contains code showing how to profile (parts of) an operator using the line_profiler package.
No profiling is done if the package isn't available or if the LINE_PROFILE environment variable is not set to "1". To create a profile, simply run:
LINE_PROFILE=1 python3 add_ons/example_simple.py
It will produce output like:
Timer unit: 1e-09 s
Total time: 8.615e-06 s
File: /workspaces/blenderaddons-ng/add_ons/example_simple.py
Function: do_execute at line 44
Line # Hits Time Per Hit % Time Line Contents
==============================================================
44 @profile # type: ignore (if line_profiler is available)
45 def do_execute(self, context: Context) -> None:
46 """Expensive part is moved out of the execute method to allow profiling.
47
48 Note that no profiling is done if line_profiler is not available or
49 if the environment variable `LINE_PROFILE` is not set to "1".
50 """
51 1 1031.0 1031.0 12.0 obj: Object | None = context.active_object
52 1 7584.0 7584.0 88.0 obj.location.x += self.amount # type: ignore (because of the poll() method that ensures obj is not None)
Note: you cannot profile the execute() method directly, so you would typically factor out expensive code and profile just that. If you don't, i.e., apply the @profile decorator directly to the execute() method, the register_class() function will complain:
ValueError: expected Operator, MESH_OT_select_colinear_edges class "execute" function to have 2 args, found 0
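For reference, the guard described above can be implemented with a pattern like this sketch (recent line_profiler releases ship an explicit profile decorator that honors the LINE_PROFILE environment variable; the fallback covers a missing package):

```python
# Sketch of the guard described above: profiling only happens when
# line_profiler is installed and LINE_PROFILE is set to "1".
try:
    # line_profiler's explicit decorator stays inactive unless the
    # LINE_PROFILE environment variable enables it
    from line_profiler import profile
except ImportError:
    def profile(func):
        # no-op fallback so the decorated code still runs without line_profiler
        return func
```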
Profiling is not the same as benchmarking, of course, so support for the pytest-benchmark package was added.
The file test_example_simple.py provides an example benchmark, and all benchmarks are stored in the .benchmarks directory.
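A benchmark is an ordinary pytest test that uses the benchmark fixture provided by the pytest-benchmark plugin; a minimal sketch (not the repo's actual test) looks like:

```python
# Minimal pytest-benchmark sketch (hypothetical, not test_example_simple.py
# itself); `benchmark` is the fixture provided by the pytest-benchmark plugin.
import bpy


def test_add_cube_benchmark(benchmark):
    def add_and_delete_cube():
        bpy.ops.mesh.primitive_cube_add()
        bpy.ops.object.delete()

    # run the callable repeatedly and record timing statistics
    benchmark(add_and_delete_cube)
```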
I have opted to put them in .gitignore because you wouldn't usually need to save them.
Benchmarks are excluded from the normal runs and are also not part of the automated workflow, because they sometimes cause VS Code to hang. So a VS Code task, Run benchmarks, is configured to run all benchmarks.
Comparing two runs is done on the command line:
pytest-benchmark compare 0001 0002
I have mixed feelings about AI. On the one hand, it can save time if you quickly want to cobble together some code; on the other hand, if you don't have experience writing the code yourself, it is difficult to create a good prompt or review the generated code. Also, LLMs are quick to create a unit test for a function, but in test-driven development that is the wrong way around! And unit tests are not just about code that works; they should also test functionality, exceptions, and edge cases. But since an LLM cannot guess the functional requirements (or 'intentions'), it tends to generate poor tests.
However, they can be used quite effectively to write a quick starter that you can then expand on. After all, it's often easier when you don't have to start from a blank slate. This is true for unit tests as well as new functions. Asking for a function that does something specific is often quicker than looking it up and implementing the code from scratch. I documented my attempt at this in this blog article.
Another thing LLMs are good at is summarizing: you can quickly create docstrings or even a webpage. The webpage for this repo was originally created with the help of GitHub Copilot, based on the readme. I still had to check the text, but it came formatted with HTML and CSS, saving a huge amount of time. In the end, though, I reworked the whole thing, included the logo, and basically had to rewrite everything in such a way that I can easily convert the Markdown to HTML.
So, use AI or not? Yes, but sparingly, and verify the results! And please have a good look at the coding style: the LLM has probably seen tons of example code that illustrates a specific bit of functionality but wasn't particularly focused on code quality. And yes, that is a friendly way of saying it often produces beginner code that will be hard to maintain. It also tends to be liberal with comments that describe what the code is doing (which you can already see by looking at the code) instead of describing its intent or providing references (which can be useful when you encounter logic bugs). So yeah, call its code unpythonic if you like 😃