Agentic AI workers—such as ChatGPT Codex, Google Antigravity, and Claude Code—are rapidly being deployed across the enterprise. Unlike traditional software, these agents operate with the same privileges as an interactive user and can, create, compile, and execute code in pursuit of a goal.
This shift introduces a fundamental challenge: these workers are not just tools, but autonomous operators. They will reason, persist, adapt, and attempt to work around constraints to complete their objectives.
The Airlock Digital allowlisting strategy already governs much of this activity by enforcing strict control over code execution—blocking unapproved applications, restricting dynamically introduced tooling, and hardening the environments Agentic workers operate in (such as PowerShell through constrained language mode). However, controlling execution alone is not sufficient in this context.
Enterprises must now define and restrict the capabilities of AI workers to ensure they remain within their intended scope and it is important that these controls are implemented at the endpoint. Ultimately these Agentic AI workers must be treated the same as employees from a security perspective, with ideally a defined 'job role' and capabilities.
Relying on signatures to prevent agentic AI activity is too brittle. Autonomous agents are goal-driven and persistent; if one path is blocked, they will continue looking for alternative ways to complete the task.
The example below demonstrates this behavior. In this scenario, the objective is to prevent winget from being used to automatically install unauthorized applications.
In the video above, the agent is instructed to install Sumatra PDF on a Windows endpoint using Codex. The endpoint is protected by the Airlock Digital Client Agent running in Enforcement Mode, allowing only trusted software to execute.
During the session, the Codex agent attempts several different approaches:
It attempts to download and install the application using winget.exe, which is blocked by policy. [Prevented by Blocklist Rule]
It attempts to spawn a new PowerShell session and run winget again. [Prevented by Blocklist Rule]
It successfully downloads the Sumatra PDF installer directly from the product website.
It attempts to execute the installer from PowerShell in two different ways. [Blocked by Default Deny]
It attempts to remove the Mark of the Web from the downloaded file and execute it again. [Blocked by Default Deny]
It attempts once more to execute the installer directly from PowerShell. [Blocked by Default Deny]
It downloads the portable version of the application and side-loads it into the user profile.
When the user attempts to run the side-loaded application, execution is blocked. [Blocked by Default Deny]
This sequence highlights the challenge of relying on static signatures or narrowly defined controls. Much like a determined attacker, an autonomous agent will continue to adapt its approach when individual techniques are blocked.
The more effective control point is not simply detecting or blocking a specific tool, command, or behavior. It is enforcing whether unauthorized code is allowed to execute at all. In this example, Airlock Digital’s default-deny enforcement prevents the agent from completing its goal, even after it changes tactics multiple times, however we can see the Agentic worker is still able to perform many actions not requiring traditional code execution, such as downloading the installers and sideloading the application which is not ideal.
By understanding how agentic workers operate, organizations can define a baseline set of approved capabilities and restrict everything else by default.
Rather than relying on brittle signatures, such as blocking winget only when it is launched by Codex, a stronger approach is to control which capabilities are allowed to execute on the endpoint in the first place. Any additional capability can then be reviewed, justified, and approved through the organization’s normal governance processes.
This default-deny model avoids the “whack-a-mole” problem of trying to block every possible tool, command, or workaround an agent may attempt. Instead, it creates a clear governance framework: agentic workers can only use capabilities that have been explicitly approved for that endpoint, user, or business function.
This shifts the control model away from reactive detection and toward proactive enforcement, helping ensure agentic AI activity remains aligned with organizational policy.
The example below shows a restricted agentic AI worker being instructed to access an SSH server for which the local user already has a key.
In the session, the agent is initially unable to connect to the SSH server while running inside its default sandbox. When the agent is allowed to operate outside the sandbox, it is still unable to start unapproved client processes, including ssh.exe and cmd.exe, which prevents it from successfully establishing the connection.
After multiple attempts, the agent eventually gives up and provides the user with the commands to run manually.
In this scenario, SSH has not been explicitly blocked. It simply has not been explicitly allowed. This is an important distinction. Under a default-deny model, capabilities are unavailable unless they have been approved.
Where this behavior is required for a legitimate business purpose, the ruleset can be modified to allow SSH under specific conditions, such as permitting connections only to an approved host. This supports the principle of least privilege while still enabling controlled use of agentic workers on the endpoint.
Although enterprise-licensed editions of AI platforms may provide some visibility and governance controls, organizations are still likely to encounter unauthorized agentic AI running on endpoints within their environment.
This is another form of Shadow IT. Users and developers who prefer specific agentic platforms may bring their own personally licensed, free, or unmanaged AI tools into the workplace (Bring Your Own AI). These tools often sit outside enterprise governance, creating visibility, compliance, and security gaps.
The good news is unauthorized agentic AI can be prevented from installing or executing on endpoints when Airlock Digital Application Control is running in enforcement mode. This includes AI browser extensions across Microsoft Edge, Google Chrome, and Mozilla Firefox.
A hardened agentic AI environment should consider controls such as:
The objective is not to block agentic AI entirely. The objective is to make its use intentional, governed, and constrained to approved business outcomes.
Agentic AI workers represent a major shift in how software operates inside the enterprise. These tools are no longer passive applications waiting for a user to click a button. They are autonomous operators that can reason, adapt, and act in pursuit of a goal.
This capability creates enormous productivity potential, but it also changes something fundamental about the threat landscape. The techniques demonstrated in this post — iterating through alternative execution paths, downloading portable binaries, removing Mark of the Web, side-loading applications — are not new. What is new is who is now capable of attempting them.
A typical user would never think to try assembly reflection to bypass a control, or cycle through multiple execution strategies when the first one fails. But an agentic AI worker will — automatically, persistently, and without hesitation. These agents effectively raise every user to the capability of a determined, technically skilled operator. The attack surface hasn't changed; the average skill level operating against it has.
This is why a preventative, default-deny approach matters more than ever. Define what is trusted, explicitly allow what is required, and deny everything else.
The endpoint remains the critical enforcement point. Whether the agentic worker is enterprise-approved or introduced through Shadow AI, the endpoint is where code executes, tools are launched, files are downloaded, and capabilities are used. Controlling that layer gives organizations a consistent enforcement boundary across both managed and unmanaged AI usage.
The allowlisting and default-deny model from Airlock Digital provides a strong foundation for this new challenge. By preventing unauthorized code execution and restricting unapproved capabilities, organizations can move away from reactive “whack-a-mole” detection and toward proactive governance of agentic AI workers.
In many ways, this is a return to a proven security principle: trust should be explicit, not assumed. As agentic AI becomes more capable, that principle becomes even more important.