OpenClaw and Australian Professional Services: The Risks You Need to Know

19 February 2026 12 min read By Jam Cyber
OpenClaw AI security risks

If you have not heard of OpenClaw yet, you will. And if you run a business in Australia, you need to understand what it is and why it should concern you.

OpenClaw (formerly known as Clawdbot and Moltbot) is an open-source AI assistant that went viral in early 2026. Unlike ChatGPT or Claude, which run in the cloud, OpenClaw runs locally on your computer and connects to your messaging apps like WhatsApp, Telegram, and Slack.

The appeal is obvious. Tell it to check you in for a flight, clear your spam, and draft three client emails, and it will do all of that while you drink your coffee. It is powerful, flexible, and free. The risk is enormous.

What Makes OpenClaw Different

AI agent capabilities

Most AI tools answer questions or help you draft content. OpenClaw actually does work for you. It executes shell commands. It accesses your files. It controls your browser. It manages your calendar. It connects to over 100 services through integrations.

This is not ChatGPT sitting in a browser tab. This is an autonomous agent with full system access to your computer. For law firms, accounting practices, consultancies, and other businesses handling confidential client information, this creates a risk profile that most are not prepared for.

The Security Risks Are Real and Growing

1. Exposed Instances Everywhere

SecurityScorecard identified over 40,000 OpenClaw deployments exposed to the internet. 63% of observed deployments are vulnerable, with 12,812 exposed instances exploitable via remote code execution (RCE) attacks. Many of these exposed instances have no authentication protecting API keys, conversation histories, OAuth credentials, and months of private messages.

2. Malicious Skills in the Marketplace

OpenClaw marketplace risks

OpenClaw uses skills which are add-ons that extend what it can do. Anyone can publish a skill to ClawHub, the official skill repository.

Security researcher Paul McCarty discovered 386 malicious skills on ClawHub between February 1-3, 2026. These skills posed as cryptocurrency trading tools but delivered information-stealing malware targeting both macOS and Windows systems. One attacker posted skills that accumulated nearly 7,000 downloads.

3. The Shadow IT Problem

Token Security warned that about 22% of employees were using ClawdBot amongst its customers. When an employee installs OpenClaw on their work laptop and connects it to corporate email, Slack, or file storage, they have just created a highly privileged system operating outside your usual controls, visibility, and security frameworks.

4. Prompt Injection Attacks

Anyone who can message the agent is effectively granted the same permissions as the agent itself. Security firms call this the lethal trifecta: AI agents have access to private data, the ability to communicate externally, and exposure to untrusted content.

Why This Matters for Australian Businesses

Professional services cyber risk

Businesses hold particularly sensitive information. Client files. Financial records. Legal documents. Strategic plans. Privileged communications. You have obligations under the Privacy Act, professional conduct rules, and client confidentiality agreements.

When an employee connects OpenClaw to your systems, even with good intentions, several things happen:

  • You lose visibility. The AI agent operates outside your security monitoring. Your IT team cannot see what it is accessing or what it is doing with that data.
  • You expand your attack surface. Connecting corporate systems without realising the risk widens the organisation attack surface.
  • You create compliance gaps. Running uncontrolled AI agents with broad system access violates regulatory requirements.
  • You cannot control what it does with client data. If OpenClaw reads a client email and that information ends up in logs, you have potentially breached confidentiality obligations.

The ACSC Guidance Makes This Clear

ACSC Essential Eight framework

In January 2026, the Australian Cyber Security Centre released guidance on managing cyber security risks when adopting AI, developed in collaboration with New Zealand's NCSC and COSBOA. The guidance specifically highlights data leaks and privacy breaches as key risks when using cloud-based AI tools.

OpenClaw takes this risk further because it has full local system access and can connect to dozens of services simultaneously. Uploading client information to AI platforms can breach confidentiality obligations, violate privacy laws, and expose privileged communications.

What Australian Firms Should Do

1. Establish Clear AI Governance Policies

If you have not already, develop a written AI policy that defines:

  • Which AI tools are approved for business use
  • What data can and cannot be uploaded to AI platforms
  • Review processes for AI-generated content
  • Consequences for using unapproved tools with client data

2. Educate Your Team

Your staff need to understand why tools like OpenClaw create risk. Most people using it are not being malicious. Explain the risks clearly and provide approved alternatives that meet their productivity needs.

3. Implement Technical Controls

Technical security controls
  • Monitor for unauthorised AI agent deployments on company devices
  • Implement device binding to tie access to specific, approved devices
  • Block access to high-risk services at the network level where appropriate
  • Review OAuth permissions and revoke access for unapproved applications

4. Review Third-Party Risk

If you use external IT providers, offshore teams, or contractors, ensure they have equivalent security controls.

5. Stay Informed

Subscribe to security advisories from the ACSC and understand the security implications of any new AI tool before your team starts using it.

Final Thoughts

OpenClaw represents a shift from AI as a passive tool to AI as an active agent that reasons, decides, and acts on its own. For Australian businesses handling confidential client information, this shift requires a strategic response.

If you need help assessing your business AI risk posture or developing appropriate governance frameworks, Jam Cyber can help. We work with Australian businesses to implement security controls that protect client information while enabling teams to work efficiently with modern tools.

Protect your business from emerging AI threats.

Jam Cyber helps Australian businesses navigate the AI security landscape with practical governance frameworks and technical controls.

Book a Free Strategy Session