Apple HIG Skills for Agents: Every Apple Design Rule, Distilled for AI Context Efficiency

By Justin Wetch

GITHUB REPO: github.com/justinwetch/HIGAgentSkills

Ask your coding agent to follow Apple's Human Interface Guidelines and watch what happens. It'll give you something that sounds right. Confident language, plausible-sounding specs, reasonable defaults. But the measurements will be wrong. The platform distinctions will be invented. The specific rules, the ones that actually matter when you're shipping an app, will be hallucinated with the same fluency as everything else.

This isn't the model's fault. The Apple HIG is over 600,000 words of prose spread across hundreds of pages. No agent can load all of that into a context window, so it does what language models do when they don't have the source material: it guesses. And because it's been trained on millions of conversations where people discussed Apple design in general terms, the guesses are convincing. A convincing guess is worse than an obviously wrong one. You ship a button that's 38pt instead of 44pt and nobody catches it until someone actually checks.

I built HIG Agent Skills to fix this. It's a skill containing 150 distilled reference files, one per HIG topic, that preserve every specific rule, measurement, API name, and platform distinction from the original guidelines while fitting within practical context budgets. The full corpus is around 130,000 tokens, but the loading protocol is tiered so a typical query only pulls 22,000 to 37,000 tokens of exactly the files it needs.

The distillation targeted a 75% reduction in word count per topic. What got cut was pedagogical scaffolding, introductory framing, and prose that restated rules already expressed elsewhere. Perfect for humans, unnecessary for agents. What got kept was everything you'd actually need to answer a design question correctly as an agent: the specific pixel values, the do/don't directives, the per-platform behavioral differences that make iOS and macOS feel like themselves.

The skill works through a routing index that maps 751 trigger keywords to their corresponding reference files across four tiers. Foundation files (color, typography, layout, accessibility, and the other building blocks) load on every invocation. Platform overviews load based on the detected target. Component and pattern files load on keyword match. And each file has a related field in its frontmatter, so the system expands one hop outward to catch adjacent guidelines the agent might need.
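The resolution logic described above can be sketched in a few lines. This is a hypothetical illustration, not the skill's actual implementation: the file names, keywords, and `related` entries below are invented stand-ins for the real routing index that ships with the repo.

```python
# Sketch of the tiered loading protocol. All data here is illustrative;
# the real skill ships its own 751-keyword routing index.

FOUNDATION = ["color.md", "typography.md", "layout.md", "accessibility.md"]

PLATFORM_OVERVIEWS = {"ios": "platform-ios.md", "macos": "platform-macos.md"}

# Trigger keywords mapped to component/pattern reference files.
KEYWORD_INDEX = {
    "tab bar": "tab-bars.md",
    "toolbar": "toolbars.md",
    "badge": "notifications.md",
}

# Each file's frontmatter "related" field, flattened into a lookup table.
RELATED = {
    "tab-bars.md": ["toolbars.md", "sidebars.md"],
    "toolbars.md": ["tab-bars.md"],
}

def files_to_load(query: str, platform: str) -> list[str]:
    """Resolve which reference files a query pulls into context."""
    q = query.lower()
    loaded = list(FOUNDATION)                       # tier 1: always load
    overview = PLATFORM_OVERVIEWS.get(platform)
    if overview:
        loaded.append(overview)                     # tier 2: detected platform
    matched = [f for kw, f in KEYWORD_INDEX.items() if kw in q]
    loaded += matched                               # tier 3: keyword matches
    for f in matched:                               # tier 4: one-hop expansion
        loaded += RELATED.get(f, [])
    # Deduplicate while preserving load order.
    return list(dict.fromkeys(loaded))
```

Under these toy tables, a query like "Design a tab bar for iOS" would resolve to the four foundation files, the iOS overview, the tab bar file, and its one-hop neighbors.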

The result is an agent that can cite actual Apple specifications instead of reconstructing them from vibes. An agent asked to design a tab bar for iOS now loads the real rules: icon sizes, label behavior, badge placement, the specific adaptations for iPad versus iPhone. Not a plausible-sounding approximation. The actual guidelines.

I've been using a version of this in my own Apple platform work (I built and shipped Natural Photo, an iOS camera app, and Artifex Viewer, an AR sculpture viewer, through the full App Store lifecycle). The difference between an agent that has the real HIG loaded and one that's improvising is the difference between getting useful design guidance and getting something you have to fact-check line by line.

The repo is at github.com/justinwetch/HIGAgentSkills. Drop it into your agent's skills path and it triggers automatically on Apple platform design queries. The README has the full technical details on the loading protocol and file structure.
