Claude Code · Skills · Developer Productivity · AI Workflow

I Stopped Repeating Myself and Built Claude Code Skills Instead

Josip Budalić

Founder & CEO

11 min read

I had a problem. Every single time I opened Claude Code, I was typing the same instructions. “Use TypeScript strict mode. Follow our naming conventions. Don't add comments unless the logic is genuinely confusing.” Over and over. For months. Like some kind of cursed copy-paste ritual.

Then I found Claude Code skills. They're a relatively new feature, and once I tried them, the copy-paste ritual stopped immediately. Skills are these little instruction files that teach Claude how to do specific things - and they stick around across sessions. You write the instructions once, and Claude just... knows. No more repeating yourself.

But here's the thing I didn't expect: skills aren't just about eliminating repetition. That's the entry point, sure. Once I started building my own, I realized they can fundamentally change how you work with AI. Like, the gap between “generic AI output” and “this actually looks like a senior dev wrote it” - skills are what close that gap.

The repetitive stuff (where most people start)

Let me give you a concrete example. We do a lot of Next.js work at HOTFIX. Every time we needed a new API endpoint, I was walking Claude through the same steps: create the route handler, add Zod validation, write error responses, generate tests, update the docs. Every single time. And if I forgot to mention one of those steps, I'd get an endpoint missing half the pieces.

So I wrote a skill. Took maybe 15 minutes. It's a SKILL.md file in my project's .claude/skills/ folder that turns all those steps into a single command. Now I type /create-endpoint orders and Claude does the whole thing. Route handler, validation, tests, docs - all of it, every time, without me babysitting.

I did the same for scaffolding new components, generating database migrations, and spinning up test suites for existing modules. Each one is a tiny skill file that automates a multi-step task I used to walk Claude through manually.

What a basic skill looks like

It's embarrassingly simple. A SKILL.md file with some YAML frontmatter and markdown instructions. Here's one I use to scaffold new API endpoints:

---
name: create-endpoint
description: Scaffold a new API endpoint with validation,
  error handling, and tests. Use when building new API routes.
disable-model-invocation: true
---

Create a new API endpoint for $ARGUMENTS:

1. Create the route handler in app/api/
2. Add Zod schema for request validation
3. Include proper error responses (400, 401, 404, 500)
4. Write integration tests covering happy path + errors
5. Add the endpoint to our API docs in docs/api.md

Follow the patterns in app/api/users/route.ts as reference.

That's it. Now I type /create-endpoint orders and Claude scaffolds the whole thing - route handler, validation, tests, docs. The $ARGUMENTS placeholder gets replaced with whatever I pass in. One command, consistent output every time.

ServiceNow reportedly cut their seller preparation time by 95% after rolling out Claude across 29,000 employees. I can't claim numbers that dramatic, but I can tell you that eliminating repetitive instructions across our team probably saves us 30-40 minutes per developer per day. That adds up fast.

Beyond repetition - this is where it gets interesting

Once you get the hang of writing skills, you start seeing opportunities everywhere. And not just for your own little shortcuts.

Take Anthropic's own frontend-design skill. This one blew my mind when I first tried it. You know how AI-generated UIs all look the same? Inter font, purple gradient, white background, minimal everything. It's that “AI slop” aesthetic you can spot from a mile away.

The frontend-design skill fixes that. It teaches Claude to make bold design decisions instead of safe, generic ones. It covers typography (stop using Inter for everything, try Bricolage Grotesque or IBM Plex), color theory (commit to a dominant color, use sharp accents), motion (actually animate things - staggered reveals on page load, scroll-triggered effects), and backgrounds (layered gradients, geometric patterns, not just solid white).

I installed it on a Friday afternoon and rebuilt a landing page I'd been unhappy with. The difference was... significant. Same prompt, same requirements, but the output had actual personality. A dark theme with purpose, typography that felt editorial, subtle animations that made the page feel alive.

My designer colleague looked at it and said “that doesn't look AI-generated.” That's the whole point.

And it's not just frontend stuff. The community has gone wild with this. There are skills for everything - SRE workflows that generate Prometheus configs and runbooks, skills that coordinate across multiple MCP servers, even a codebase-visualizer skill that generates interactive HTML maps of your project structure. One dev wrote a skill that analyzes his personal design patterns across projects and codified his own aesthetic into a reusable set of instructions.

The part where I messed up (and what I learned)

My first few skills were terrible. I crammed everything into a single massive SKILL.md - coding conventions, deployment procedures, testing patterns, review checklists - all in one file. It was like 2,000 lines long. Claude would load it and then get confused about which parts applied to what I was actually doing.

Sound familiar? It's the same context overload problem I wrote about in my context management post. More isn't better. You need focused, specific skills that do one thing well.

I also made the mistake of not writing good descriptions. The description in the frontmatter is how Claude decides when to load a skill automatically. If it's vague, the skill either triggers when it shouldn't or doesn't trigger when it should. I had a skill called “code-helper” with the description “helps with code.” Super useful, right? Claude was loading it on basically every prompt.

How to write a skill that actually works

Anthropic published a comprehensive guide to building skills that's worth reading end to end. But after building probably 20+ skills at this point, here's what I've boiled it down to.

Start with your use cases, not the code

Before you write a single line of YAML, write down 2-3 specific things you want the skill to help with. “I want Claude to generate API endpoints that follow our REST conventions” is good. “I want Claude to be better at coding” is useless. If you can't describe a concrete scenario, you don't have a skill yet - you have a wish.

The description is more important than the instructions

I know that sounds backwards. But the description determines when your skill gets used, and a skill that never triggers is a skill that doesn't exist. Include the actual phrases someone would say. If your skill helps with database migrations, put “database migration” and “schema change” in the description. Claude matches on these.
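Here's what that looks like in practice. The migration skill below is a sketch (the name and wording are my own invention for illustration), but it shows the difference between a description that triggers reliably and one that doesn't:

```markdown
---
# Too vague - either loads on everything or never:
# description: Helps with the database.

# Specific - contains the phrases people actually say:
name: db-migration
description: Generate and review database migrations. Use when the user
  mentions a database migration, schema change, or altering tables.
---

When creating a migration:

1. Generate the migration file with a timestamped name
2. Include both up and down steps
3. Flag any change that could lock a large table
```

The description is doing the heavy lifting: "database migration", "schema change", and "altering tables" are exactly the words a teammate would type.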

Keep SKILL.md under 500 lines

This is straight from Anthropic's guide and I wish I'd known it earlier. If your skill needs more detail, use supporting files - put reference docs, templates, and examples in the same directory and link to them from your SKILL.md. Claude loads the main file first and pulls in the rest only when it needs them.

my-skill/
├── SKILL.md           # Main instructions (keep focused)
├── reference.md       # Detailed API docs
├── examples/
│   └── sample.md      # Example output
└── scripts/
    └── validate.sh    # Script Claude can run
A few more things that make a skill hold up:

  • Be specific with instructions. “Write good tests” tells Claude nothing. “Use describe/it blocks, mock external dependencies with vi.mock, assert on both happy path and error cases” tells Claude everything.
  • Include error handling. What should Claude do when it hits a common problem? Document the gotchas. If your deployment skill might fail because of a missing env variable, say so.
  • Use disable-model-invocation: true for dangerous stuff. Deploy skills, database migration skills, anything with side effects - make sure only you can trigger it. You don't want Claude deciding your code “looks ready” and deploying to production.
  • Test in three layers. Does it trigger when it should? Does it produce the right output? Is that output actually better than what you'd get without the skill? If you can't answer yes to all three, keep iterating.

The advanced stuff (once you're hooked)

Skills have some features that aren't obvious at first. The ones I use most:

Dynamic context injection. You can embed shell commands in your skill with the !`command` syntax. The command runs before Claude sees the prompt, and the output gets inserted. I have a PR review skill that automatically pulls the diff, comments, and changed file list before Claude even starts reviewing. No manual copy-pasting.
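Here's roughly what that PR review skill looks like. The exact git commands are my setup, not a requirement - anything you can run in a shell can be injected this way:

```markdown
---
name: review-pr
description: Review the current branch's changes as a pull request.
  Use when asked to review a PR or the current diff.
---

## Context (injected before Claude sees the prompt)

- Changed files: !`git diff --name-only main...HEAD`
- Full diff: !`git diff main...HEAD`
- Recent commits: !`git log --oneline main..HEAD`

Review the diff above for correctness, naming, and missing tests.
```

Each !`...` command runs first and its output is spliced into the skill body, so Claude starts the review with everything already in front of it.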

Subagent execution. Add context: fork to your frontmatter and the skill runs in an isolated context. Perfect for research tasks where you don't want the results polluting your main conversation. I use this for a “deep-research” skill that explores codebases without cluttering my working context.
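Enabling that is a single line of frontmatter. A sketch of my deep-research skill, with the instructions abbreviated:

```markdown
---
name: deep-research
description: Explore the codebase to answer an architectural question.
context: fork
---

Investigate $ARGUMENTS. Read whatever files you need, then report back
a short summary - only the summary returns to the main conversation.
```

All the file reads happen in the forked context and get thrown away; my working context only pays for the final answer.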

Arguments. Skills accept arguments through $ARGUMENTS placeholders. My fix-issue skill takes a GitHub issue number: /fix-issue 423. Claude reads the issue, understands the requirements, implements the fix, writes tests, and creates a commit. One command.
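For completeness, a stripped-down sketch of that fix-issue skill. I happen to use the GitHub CLI (`gh`) to pull the issue body - that's an assumption of this sketch, and any other way of getting the issue text into context works just as well:

```markdown
---
name: fix-issue
description: Implement a fix for a GitHub issue given its number.
disable-model-invocation: true
---

Fix GitHub issue $ARGUMENTS:

1. Run `gh issue view $ARGUMENTS` and restate the requirements
2. Implement the smallest fix that satisfies them
3. Write tests that fail without the fix
4. Create a commit that references the issue number
```

Note the disable-model-invocation flag: I never want Claude deciding on its own that an issue looks fixable.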

This connects to specs, by the way

If you've read my post on Spec-Driven Development, skills are the natural next step. SDD gives you the “what to build” framework. Skills give you the “how to build it” framework. Together, they're the closest I've gotten to Claude consistently producing code that I don't have to rewrite.

A spec defines the feature. A skill defines the standards. Claude follows both. It's like having a senior developer who never forgets the team's conventions and never gets tired of following them.

Where to put your skills (it matters)

Skills can live in three places, and picking the right one trips people up:

  • ~/.claude/skills/ - Personal skills. Available in every project. I put my general preferences here - commit message style, code review checklist, my debugging workflow.
  • .claude/skills/ - Project skills. Commit these to git. Your whole team gets them. Project-specific conventions, deployment procedures, API patterns.
  • Plugins - Shareable skill packs. The frontend-design skill is distributed as a plugin. You install it once, it works everywhere.
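Getting a project skill into the repo is nothing more than creating the folder and the file. A sketch in shell, reusing the endpoint example from earlier (the skill body is abbreviated):

```shell
# Project skills live in .claude/skills/ and travel with the repo.
mkdir -p .claude/skills/create-endpoint

# A minimal SKILL.md: YAML frontmatter plus markdown instructions.
cat > .claude/skills/create-endpoint/SKILL.md <<'EOF'
---
name: create-endpoint
description: Scaffold a new API endpoint with validation and tests.
---
Create a new API endpoint for $ARGUMENTS following app/api/users/route.ts.
EOF
```

Commit the new folder (`git add .claude/skills/`) and the skill rides along with every clone. Swap `.claude/skills/` for `~/.claude/skills/` and the same file becomes a personal skill instead.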

Project skills are what changed things for our team. We committed our skills to the repo, and suddenly every developer (and every Claude session) was working from the same playbook. No more “well, on my machine Claude does it differently.” The skills are version-controlled, reviewed in PRs, and evolve with the project.

Plot twist: skills aren't just for Claude

Here's something I didn't realize until recently, and it completely changed how I think about investing time into writing skills: they're not locked to Claude Code. Skills follow an open format - a simple, standardized structure that any AI coding tool can support. And many already do.

Cursor supports them. OpenAI's Codex supports them. OpenCode supports them. And the list keeps growing. That means the skills you write today aren't tied to a single vendor. They're portable. You can switch tools, try new ones, or use different agents for different tasks - and your skills come with you.

Think about what that means for a second. Every skill you build is an investment that pays off regardless of which AI tool you're using six months from now. Your API endpoint scaffolding skill, your PR review workflow, your testing conventions - all of it transfers. Write once, use everywhere.

This is a big deal because the AI coding tool landscape is moving fast. New tools pop up constantly, each with their own strengths. Having your workflows locked into one tool's proprietary format would be a nightmare. The open skills format means you can evaluate new tools without starting from scratch every time.

It also means teams can standardize on skills as their source of truth for conventions and workflows, regardless of whether individual developers prefer Claude Code, Cursor, or something else entirely. One set of skills, many tools. That's how it should work.

For us at HOTFIX, this was the final push to go all-in on skills. We're not betting on a single tool - we're betting on the format. And since the format is open and straightforward (just markdown files with YAML frontmatter), there's zero risk of it becoming obsolete. Even if every AI tool disappeared tomorrow, you'd still have a well-documented set of team conventions and workflows. That has value on its own.

So should you bother?

If you're using Claude Code more than a couple times a week, yes. Absolutely. Start with the annoying repetitive stuff - whatever you keep explaining over and over. That's your first skill. It'll take you 15 minutes, maybe 30 if you're being thorough.

Then install the frontend-design skill and see what's possible when you give Claude real design taste instead of letting it default to the same boring template every time.

And when you're ready to go deeper, read Anthropic's complete guide. It covers everything from skill architecture to testing strategies to distribution. It's the kind of documentation I wish existed when I started - would've saved me from my 2,000-line monstrosity.

The bottom line is this: Claude is only as good as the instructions you give it. Skills let you give those instructions once and get the benefit forever. The 15 minutes you spend writing a skill today will save you hours over the next month. And the quality of Claude's output? Night and day.


Josip Budalić

Founder & CEO

Josip runs HOTFIX d.o.o., a dev shop based in Croatia. He's been writing code for over a decade and is slightly obsessed with finding ways to ship faster without sacrificing quality. When not arguing with AI assistants, he's probably hiking somewhere or consuming unhealthy amounts of coffee.

Want skills built for your team?

We build custom Claude Code skills and AI workflows for dev teams. If you're tired of inconsistent AI output across your team, we can help standardize it.