ClawHub has 13,729+ skills. Snyk’s ToxicSkills audit found 36.82% of them had security flaws. 76 were confirmed malicious. The ClawHavoc attack planted 2,400+ malicious skills that exfiltrated SSH keys and API tokens from anyone who installed them.
The barrier to publishing a skill on ClawHub: a SKILL.md file and a one-week-old GitHub account. That’s it.
Building your own custom skills solves two problems at once. You get exactly the functionality you need, tailored to your workflow. And you avoid the supply chain risk of running code from strangers on a system with access to your email, calendar, and CRM.
This guide covers the full OpenClaw custom skills development process — from the anatomy of a skill to testing, security vetting, and deployment.
What an OpenClaw Skill Actually Is
An OpenClaw skill is a self-contained module that extends the agent’s capabilities. At its core, a skill is a directory with a SKILL.md file that tells the agent what the skill does, when to use it, and what tools it provides. Think of it as a job description for a specific capability.
A minimal skill structure looks like this:
my-custom-skill/
├── SKILL.md          # Skill definition and instructions
├── scripts/
│   └── main.py       # The actual tool logic
└── README.md         # Documentation (optional)
The SKILL.md is the interface between your skill and the OpenClaw agent. It contains:
- Name and description: What the skill does in plain language. The agent reads this to decide when to invoke the skill.
- Instructions: Step-by-step guidance for how the agent should use the skill, including input formats, expected outputs, and error handling.
- Tool definitions: The specific commands, scripts, or API calls the skill provides. Each tool has a name, description, parameters, and return type.
- Dependencies: Any external packages, API keys, or services the skill requires.
Building Your First Custom Skill: A Practical Example
Let’s build a skill that checks the status of a website and reports response time. Simple enough to understand, useful enough to actually deploy.
Step 1: Write the SKILL.md
# Website Health Check Skill

## Description

Checks the HTTP status and response time of a given URL.
Returns status code, response time in milliseconds, and
whether the site is currently reachable.

## Instructions

- Use this skill when the user asks about website status,
  uptime, or response time.
- Always report the status code, response time, and a
  human-readable summary.
- If the request times out after 10 seconds, report the
  site as unreachable.

## Tools

- `check_website`: Accepts a URL, returns status code and
  response time.
  - Usage: Run `python3 scripts/check_website.py <url>`
  - Returns: JSON with `status_code`, `response_time_ms`,
    `reachable` (boolean)
Step 2: Write the Tool Script
import sys
import json
import time
import urllib.request
import urllib.error

def check_website(url):
    try:
        start = time.time()
        req = urllib.request.Request(url, method="HEAD")
        response = urllib.request.urlopen(req, timeout=10)
        elapsed = round((time.time() - start) * 1000)
        return {
            "status_code": response.status,
            "response_time_ms": elapsed,
            "reachable": True
        }
    except urllib.error.HTTPError as e:
        return {
            "status_code": e.code,
            "response_time_ms": None,
            "reachable": True
        }
    except Exception:
        return {
            "status_code": None,
            "response_time_ms": None,
            "reachable": False
        }

if __name__ == "__main__":
    url = sys.argv[1] if len(sys.argv) > 1 else ""
    if not url.startswith("http"):
        url = "https://" + url
    result = check_website(url)
    print(json.dumps(result, indent=2))
Notice what this script doesn’t do: no network calls to third-party APIs, no writing to the filesystem, no reading environment variables for credentials. It takes a URL, makes a HEAD request, and returns structured data. That’s the security posture you want in a custom skill — minimal scope, minimal risk.
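One thing the entry point does loosely is URL normalization: the bare `startswith("http")` check lets through inputs like `httpfoo://x` or an empty string, which then get misreported as unreachable sites instead of invalid input. A stricter normalization step, sketched here with only the standard library (`normalize_url` is a hypothetical helper, not part of OpenClaw), rejects anything that isn’t plain http(s) before a request is ever made:

```python
from urllib.parse import urlparse

def normalize_url(raw: str) -> str:
    """Coerce user input into an http(s) URL, rejecting every other scheme."""
    candidate = raw if "://" in raw else "https://" + raw
    parsed = urlparse(candidate)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"unsupported scheme: {parsed.scheme!r}")
    if not parsed.netloc:
        raise ValueError("missing hostname")
    return candidate
```

A `ValueError` here gives the agent a distinct, reportable failure ("invalid URL") rather than a misleading `"reachable": false`.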
Step 3: Install the Skill
Place the skill directory in your OpenClaw skills folder. The exact path depends on your configuration, but typically:
cp -r my-custom-skill/ ~/.openclaw/skills/
Restart the OpenClaw agent. The skill is now available. You can verify by asking the agent: “Check the status of managemyclaw.com.” It should invoke the skill and return the response time and status code.
Security Vetting Your Own Skills
Just because you wrote it doesn’t mean it’s safe. Apply the same vetting process to your own skills that you’d apply to marketplace skills:
- No outbound calls to unknown endpoints. If your skill needs to reach an API, hardcode the domain. Don’t accept arbitrary URLs from the agent without validation.
- No filesystem writes outside the skill directory. A skill should read its own config and write to its own logs. Never write to `/tmp`, `/etc`, or the home directory.
- No credential handling. If your skill needs API keys, use environment variables or Composio OAuth — never hardcode credentials in the script and never accept them as command-line arguments (they’ll show up in process lists).
- No shell injection vectors. Never pass user input directly to `os.system()` or `subprocess.run(shell=True)`. Use parameterized calls.
- Minimal dependencies. Every pip package is a supply chain risk. The website health check above uses only the standard library. If you must use external packages, pin exact versions and audit the package.
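Two of those rules can be made concrete in a few lines. This is a sketch, not an OpenClaw API: the `MY_SERVICE_API_KEY` name and the `run_tool` wrapper are illustrative. It reads a secret from the environment and invokes a tool with untrusted input as a single argv element, so no shell ever parses it:

```python
import os
import subprocess
import sys

# Credentials come from the environment, never from argv:
# command-line arguments are visible to every process via `ps`.
API_KEY = os.environ.get("MY_SERVICE_API_KEY", "")  # illustrative variable name

def run_tool(script_path: str, user_input: str) -> str:
    """Invoke a skill script on untrusted input without involving a shell."""
    # Argument-list form: user_input becomes exactly one argv entry, so an
    # input like '; rm -rf ~' is passed through as inert text, never executed.
    proc = subprocess.run(
        [sys.executable, script_path, user_input],
        capture_output=True, text=True, timeout=30,
    )
    return proc.stdout
```

Here `sys.executable` stands in for the `python3` that the SKILL.md invokes; either way, the point is the argument list.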
On r/selfhosted, a thread about plugin security (178 upvotes) had this top comment: “If your self-hosted AI can run arbitrary code, you don’t have an AI assistant. You have a remote code execution vulnerability with a chat interface.”
Why this matters: Your custom skills run inside the same Docker container as your OpenClaw agent. If a skill has a vulnerability, the attacker inherits every permission your agent has — email access, calendar access, CRM access. Docker sandboxing limits the blast radius, but proper security hardening is the foundation.
Testing Skills Before Production
Never deploy a skill directly to your production agent. Test it in isolation first:
1. Unit test the script. Run the script directly with test inputs. Verify it handles edge cases — empty input, invalid URLs, timeouts, Unicode characters, extremely long strings.
2. Test with the agent in sandbox mode. Install the skill on a test OpenClaw instance (or your production instance with sandbox tools enabled) and ask the agent to use it. Watch for unexpected invocations — the agent might try to use your skill in contexts you didn’t anticipate.
3. Monitor token consumption. A poorly written SKILL.md can cause the agent to invoke the skill repeatedly or pass unnecessarily large contexts. Check that each invocation uses a reasonable number of tokens.
4. Check for permission conflicts. Make sure your skill’s permissions don’t conflict with your tool permission allowlists. If your agent has read-only email access, a skill that tries to send emails will fail — and you want to discover that in testing, not in production.
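Step 1 can be partly automated with a tiny harness that invokes a tool exactly the way the agent would and asserts the output honors its documented contract. A sketch (the `check_tool_contract` helper and the key set come from the health-check example’s documented JSON output, not from any OpenClaw testing API):

```python
import json
import subprocess

REQUIRED_KEYS = {"status_code", "response_time_ms", "reachable"}

def check_tool_contract(argv):
    """Run a skill tool as a subprocess and validate its JSON output contract."""
    proc = subprocess.run(argv, capture_output=True, text=True, timeout=30)
    payload = json.loads(proc.stdout)  # raises if the tool printed non-JSON
    missing = REQUIRED_KEYS - payload.keys()
    assert not missing, f"output missing keys: {missing}"
    assert isinstance(payload["reachable"], bool), "reachable must be a boolean"
    return payload
```

Point it at the edge cases from step 1: for example, `check_tool_contract([sys.executable, "scripts/check_website.py", "https://no-such-host.invalid"])` should come back with `reachable` false rather than a traceback.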
Skill Ideas for Common Business Workflows
| Skill | What It Does | Complexity |
|---|---|---|
| Website health check | Checks HTTP status and response time | Beginner |
| Invoice PDF generator | Creates PDF invoices from structured data | Intermediate |
| Competitor price checker | Scrapes public pricing pages on schedule | Intermediate |
| Database backup verifier | Checks that the latest backup is recent and valid | Intermediate |
| Custom CRM connector | Bridges your proprietary CRM API with OpenClaw | Advanced |
Start with the simplest skill that solves a real problem. A website health check skill that runs reliably is worth more than a sophisticated CRM connector that fails intermittently.
The Bottom Line
Custom skills are safer and more useful than marketplace skills — if you build them right. You control the code, the dependencies, and the permissions. You aren’t trusting a stranger’s SKILL.md file from an account that’s been on GitHub for 8 days. The tradeoff is time: building and testing a custom skill takes 1-3 hours. For a skill you’ll use daily across your workflows, that’s a worthwhile investment.
For the 13,729+ skills on ClawHub, treat them like npm packages: popular and well-maintained ones are probably fine, but vet everything before installing, and never run a skill you haven’t read the source code for.
Frequently Asked Questions
Can I publish my custom skill to ClawHub for others to use?
Yes. Publishing requires a SKILL.md file, a GitHub repository, and an account at least 1 week old. Before publishing, consider that your skill becomes part of the public supply chain — if it has a vulnerability, everyone who installs it is affected. Run the same security vetting checklist on your skill that you’d want others to run on theirs.
How many custom skills can I install on a single OpenClaw instance?
There’s no hard limit, but each skill adds context to the agent’s system prompt, which consumes tokens. At ManageMyClaw, we recommend 3-7 skills for Starter deployments and up to 15 for Business tier. Beyond 15, the context overhead starts affecting response time and cost. Focus on skills that map directly to your active workflows.
What programming languages can I use for OpenClaw skills?
Any language that can run as a command-line script and output structured data (JSON). Python is the most common because it’s pre-installed on most systems and has excellent standard library support. Bash, Node.js, Go, and Ruby all work. The SKILL.md file specifies the command to run — the agent doesn’t care what language it’s written in.
Do custom skills work with the tool permission allowlists?
Yes. Your tool permission configuration applies to custom skills the same way it applies to built-in tools. If your allowlist restricts the agent to read-only email access, a custom skill that attempts to send emails will be blocked. This is by design — the allowlist is the authority layer that governs all agent actions, regardless of where the skill came from.
Can ManageMyClaw build custom skills as part of the deployment?
Custom skill development is quoted separately from the standard deployment tiers. The Pro and Business tiers include installation and vetting of existing ClawHub skills (up to 7 and 15 respectively). If you need a custom integration — connecting to a proprietary CRM, building a custom reporting tool, or creating a workflow-specific skill — contact us for a quote.
Skills Installed and Vetted for You
ManageMyClaw includes ClawHub skill vetting on every deployment — every plugin checked against ClawHavoc-style attacks before installation. Starting at $499.
Get Started — No Call Required


