
How to Publish Your First OpenClaw Skill to the Marketplace

Publishing an OpenClaw skill sounds intimidating if you've never done it. It's actually straightforward once you understand the structure. I published my first skill (a simple URL shortener integration) in about four hours, and most of that was figuring out things the docs don't explain well. Here's the streamlined version.

## What is an OpenClaw skill?

A skill is a packaged capability that any OpenClaw agent can use. It's a directory with a manifest file, one or more handler functions, and optionally some configuration. Think of it like an npm package, but for agent capabilities instead of code libraries.

Skills can do almost anything: call APIs, process files, interact with databases, control devices. The marketplace has skills ranging from "send a Slack message" to "manage a Kubernetes cluster."

## The skill directory structure

Every skill follows this layout:

```
my-skill/
  manifest.yaml
  handlers/
    main.ts
  tests/
    main.test.ts
  README.md
```

That's the minimum. Larger skills might have multiple handlers, a `config/` directory, and a `schemas/` directory for input/output type definitions.
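If you prefer to scaffold that layout from the shell, a couple of commands will do it (`my-skill` is just a placeholder name):

```shell
# Create the minimal skill layout: manifest, one handler, one test, a README
mkdir -p my-skill/handlers my-skill/tests
touch my-skill/manifest.yaml \
      my-skill/handlers/main.ts \
      my-skill/tests/main.test.ts \
      my-skill/README.md
```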

## Writing the manifest

The manifest file is where most people get stuck, not because it's complex, but because the field names aren't obvious. Here's a real example:

```yaml
name: url-shortener
version: 1.0.0
description: Shorten URLs using the Dub.co API
author: your-username

capabilities:
  - name: shorten_url
    description: Takes a long URL and returns a shortened version
    handler: handlers/main.ts#shortenUrl
    input:
      type: object
      properties:
        url:
          type: string
          description: The URL to shorten
        custom_slug:
          type: string
          description: Optional custom slug for the short URL
      required:
        - url
    output:
      type: object
      properties:
        short_url:
          type: string
        original_url:
          type: string

permissions:
  network:
    - https://api.dub.co/*

secrets:
  - name: DUB_API_KEY
    description: API key for Dub.co
    required: true
```

A few things worth noting:

The `capabilities` array defines what your skill can do. Each capability is a function the agent can call. The `description` field on each capability matters more than you think. This is what the agent reads when deciding whether to use your skill. Be specific. "Shortens URLs" is fine. "Handles URL stuff" is not.

The `permissions.network` array declares which external hosts your skill will contact. During marketplace review, any network calls not listed here will get your submission flagged. Be explicit.

The `secrets` array declares what credentials your skill needs. These get injected at runtime through ClawCoil. Never hardcode API keys in your handler.

## Writing the handler

Handlers are TypeScript functions that receive structured input and return structured output:

```typescript
import type { SkillContext } from '@openclaw/skill-sdk'

export async function shortenUrl(
  input: { url: string; custom_slug?: string },
  ctx: SkillContext
) {
  const apiKey = ctx.secrets.get('DUB_API_KEY')
  if (!apiKey) {
    return { error: 'DUB_API_KEY not configured' }
  }

  const response = await fetch('https://api.dub.co/links', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      url: input.url,
      slug: input.custom_slug,
    }),
  })

  if (!response.ok) {
    const err = await response.text()
    return { error: `Dub API error: ${response.status} ${err}` }
  }

  const data = await response.json()
  return {
    short_url: data.shortLink,
    original_url: input.url,
  }
}
```

The `SkillContext` object gives you access to secrets, logging, and agent metadata. Use `ctx.secrets.get()` instead of `process.env`. Environment variables aren't available in the skill sandbox.

Return errors as objects; don't throw exceptions. Thrown exceptions get caught by the runtime and turned into generic error messages that aren't helpful to the agent or the user.
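The same pattern works in plain TypeScript, independent of the skill SDK. Here's a minimal, dependency-free sketch of wrapping a throwing operation into an error-as-object result; the `Result` type and `toResult` helper are illustrative names, not part of the SDK:

```typescript
// A result is either the success payload or an { error } object --
// the shape the runtime can relay usefully, instead of a thrown exception.
type Result<T> = T | { error: string }

// Hypothetical helper: run a throwing operation, capture failures as data.
function toResult<T>(fn: () => T): Result<T> {
  try {
    return fn()
  } catch (e) {
    // Surface a specific message the agent can act on,
    // rather than letting the runtime swallow the exception.
    return { error: e instanceof Error ? e.message : String(e) }
  }
}

// Example: the URL constructor throws on invalid input.
const ok = toResult(() => new URL('https://example.com/x').hostname)
const bad = toResult(() => new URL('not a url').hostname)

console.log(ok)  // the hostname string
console.log(bad) // an { error: ... } object describing the failure
```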

## Testing locally

OpenClaw provides a local test runner:

```bash
openclaw skill test ./my-skill
```

This runs your skill in a sandboxed environment that mimics the production runtime. It reads your `tests/` directory and executes test files.

Write tests that cover the happy path and at least two error cases:

```typescript
import { testSkill } from '@openclaw/skill-sdk/testing'

const skill = testSkill('./my-skill', {
  secrets: { DUB_API_KEY: 'test-key' },
})

test('shortens a valid URL', async () => {
  const result = await skill.call('shorten_url', {
    url: 'https://example.com/very/long/path',
  })
  expect(result.short_url).toBeDefined()
  expect(result.original_url).toBe('https://example.com/very/long/path')
})

test('returns error for missing API key', async () => {
  const noKeySkill = testSkill('./my-skill', { secrets: {} })
  const result = await noKeySkill.call('shorten_url', {
    url: 'https://example.com',
  })
  expect(result.error).toContain('DUB_API_KEY')
})
```

Don't skip testing. Failed tests are the number one reason submissions get rejected.

## Validating before submission

Run the linter before submitting:

```bash
openclaw skill lint ./my-skill
```

This catches the common mistakes: missing required manifest fields, undeclared network permissions, secrets referenced in code but not in the manifest, and input/output schema mismatches. Fix everything it flags. The marketplace review runs the same checks automatically.

## Submitting to ClawVine

Once your skill passes local tests and linting:

```bash
openclaw skill publish ./my-skill --registry clawvine
```

This packages your skill, uploads it, and creates a submission for review. You'll need a verified ClawVine account. If you don't have one, register at clawvine.com and complete identity verification first. The verification takes a day or two.

After submission, your skill enters the review queue. Reviews typically take 2-5 business days. The review checks for:

- Security: no data exfiltration, no excessive permissions, no obfuscated code
- Quality: tests pass, error handling is reasonable, descriptions are accurate
- Compatibility: works with the current OpenClaw version

If the review flags issues, you'll get specific feedback. Fix the problems and resubmit. Most first submissions need one round of revisions. That's normal.

## After publication

Once approved, your skill appears in the ClawVine marketplace. A few things to know:

Ratings matter. Early ratings heavily influence discovery ranking. Ask a few people to try your skill and leave honest reviews.

Keep it maintained. Skills that don't update for 6 months get flagged as potentially abandoned. If the underlying API changes and your skill breaks, users will report it, and unresolved reports hurt your trust score.

Version thoughtfully. Breaking changes require a major version bump. If you change the input schema, users' existing configurations will break. Add new capabilities instead of changing existing ones when possible.
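As a concrete (hypothetical) sketch: adding a capability is additive, so it only needs a minor bump, while renaming a capability or changing its input schema would force a major one. In manifest terms:

```yaml
# 1.0.0 -> 1.1.0: adding a capability is backward compatible.
# Renaming shorten_url or changing its input schema would require 2.0.0.
version: 1.1.0

capabilities:
  - name: shorten_url          # unchanged from 1.0.0
    handler: handlers/main.ts#shortenUrl
  - name: expand_url           # new in 1.1.0 (hypothetical capability)
    description: Resolve a short URL back to its original destination
    handler: handlers/main.ts#expandUrl
```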

The whole process, from empty directory to published skill, takes a weekend if you already know what you want to build. The hardest part is writing a clear description. Everything else is mechanical.
