Datawhale insights | Author: Zeng Haolong, Datawhale team

Have you ever been scared off by an open-source project's code? Picture the scene: you find an impressive project on GitHub, eagerly clone it, and open it up, only to face dozens of folders, hundreds of files, and a long Chinese/English README, while you cannot even locate the API entry point.

In this article I will walk you through building an Agent Skill from scratch that turns an AI into your "code-repository know-it-all", and then install it into the lobster (OpenClaw). The whole process is a step-by-step tutorial; whether you are a seasoned developer or a college student new to programming, you can follow along and build it yourself.

## Agent Skill: a "skill pack" for your AI

We have all played games where a character starts out knowing nothing but can learn skills: learn "Heal" and it can restore HP, learn an ultimate and it can unleash a big move. An Agent Skill works the same way. A large language model is a generalist: it knows a little about everything but masters nothing. An Agent Skill is a "professional skill pack" you install on it; once installed, it becomes an expert in a specific domain.

How does this differ from an ordinary prompt? An Agent Skill is essentially a folder holding the AI's "skill pack". The key file is SKILL.md, which has two parts.

Metadata block (the first few lines of the file), which tells the AI what the skill does and when to use it:

```
---
name: skill-name        # skill name
description: What the skill does, and the conditions that trigger it
---
```

Body (the Markdown content below): detailed steps, examples, and rules, effectively an operating manual written for the AI.

How is a skill triggered? Two ways:

- Manual invocation: type `/skill-name` plus your question in the input box.
- Automatic invocation: when your question matches the skill's description, the Agent calls the corresponding Skill on its own.

Where do skills live? In OpenCode:

- Global (shared by all projects): `~/.config/opencode/skills/<name>/`
- Per project (current project only): `.opencode/skills/<name>/`
- Reference: https://opencode.ai/docs/skills/

In OpenClaw:

- Global (shared by all projects): `~/.openclaw/skills/<name>/`
- Project-specific (that project only): `<project>/.agents/skills/`

Agent Skills have become an open standard (Apache-2.0 License): they work not only with OpenCode but also with OpenClaw, Claude Code, Cursor, Gemini CLI, OpenAI Codex, and other AI coding tools.

## Step 1: set up your AI coding assistant, OpenCode

Before building a Skill, let's prepare the tooling. This article uses OpenCode, an open-source AI coding agent with 120k+ stars on GitHub.

Why OpenCode? Three reasons:

- Fully open source: your data stays safe and there is no account-ban risk.
- Flexible model switching (Claude, GPT, Gemini, Kimi, GLM, and more).
- Friendly to users in China: simple network configuration, plus support for domestic models and locally deployed models.

### 1.1 Install OpenCode (one command)

Run this command to install automatically:

```
curl -fsSL https://opencode.ai/install | bash
```

After installation, configure the PATH so the system can find OpenCode:

```
vim ~/.bashrc
# Add this line to the file:
export PATH=/home/ma-user/.opencode/bin:$PATH
# Save and quit with :wq!
source ~/.bashrc
```

Type `opencode` in a terminal to enter the interface. Press Tab to switch between the two agent modes, Plan and Build.

### 1.2 Connect a large model (the brain)

With the `/connect` and `/models` commands you can hook up different model providers; after entering the corresponding API key you can select a specific model version.

Zero-cost quick start: OpenCode also ships with the free OpenCode Zen models, usable with no configuration, including Big Pickle, Kimi K2.5, MiniMax M2.5, and GLM-5.

## Step 2: hand-crafting the "code-repository know-it-all"

With the tooling ready, we reach the core part: creating a "code-repository Q&A expert" Skill from scratch. The overall flow has four steps: install skill-creator → connect DeepWiki → create the Skill → debug and refine.

### 2.1 Install the "skill that creates skills": skill-creator

Before building a Skill of our own,
we first install a helper. skill-creator is a meta-skill: like a recipe that teaches you how to cook, it is a Skill built specifically to help you create other Skills. Through conversation it guides you in clarifying your requirements, then automatically generates a complete, well-structured Skill.

In the OpenCode input box, one sentence installs it:

```
Install this Skill to ~/.config/opencode/skills/skill-creator, source: https://github.com/openai/skills/tree/main/skills/.system/skill-creator
```

The Agent handles the download and installation automatically. If the automatic install fails, follow these manual steps:

1. Download the Skill folder: fetch the entire skill-creator folder from https://github.com/openai/skills/tree/main/skills/.system/skill-creator
2. Create the target directory and copy the files:

```
mkdir -p ~/.config/opencode/skills/skill-creator
cp -r /your/download/path/skill-creator/* ~/.config/opencode/skills/skill-creator/
```

### 2.2 Connect DeepWiki so the AI can "read" code repositories

For the AI to answer questions about a code repository, it must be able to understand the repository's contents. This is where DeepWiki comes in: it turns GitHub repositories into knowledge bases the AI can query. We connect DeepWiki to OpenCode through MCP (think of it as a "USB port" for AI), so the AI can consult repositories efficiently at any time. The DeepWiki MCP provides three tools.

Run the following command to install the DeepWiki MCP, entering the requested values and confirming:

```
opencode mcp add
```

After installation, type `/mcps` to check the connection status.

### 2.3 Tell the AI, in natural language, what Skill you want

With the preparation done, we can create the Skill. Press Tab to switch to Plan mode and type `/skills` to invoke skill-creator. After confirming, the input box auto-fills `/skill-creator`, and you describe the Skill you want in natural language.

Don't be intimidated by the prompt below. In essence, we are telling the AI three things:

- What the Skill does: a Q&A expert for the Ascend inference-ecosystem code repositories
- Which repositories it must cover: vllm, vllm-ascend, the MindIE family, msmodelslim, and so on
- How to handle uncertainty: ask the user proactively, never make things up

The exact input:

/skill-creator Create a new code-repository intelligent Q&A Skill named code-repos-expert, supporting answers in both Chinese and English. The Skill content itself must be in English. Requirements:

<expert_level_skill>
1 - Ascend inference-ecosystem open-source repository intelligent Q&A expert: from the user's input, accurately infer their underlying goals and intent, the task they want completed or the question they want answered, and truly understand their need. Use this Skill when the user's question involves vllm, vllm-ascend, MindIE-LLM, MindIE-SD, MindIE-Motor, MindIE-Turbo, modelslim, or other Ascend inference-ecosystem open-source projects, in any of these areas: usage, deployment, supported models, supported features, system architecture, configuration management, debugging, testing, troubleshooting, performance optimization, custom development, source-code analysis, or any other technical question about Ascend inference projects.
2 - For each open-source repository: based on the repository name contained in the user's input, use deepwiki as follows: owner/repo plus a query generated from the user's input after intent recognition, to optimize context-aware repository Q&A.
3 - When the repository cannot be determined, proactively ask the user for confirmation; blind guessing is forbidden. Moreover, any information that is uncertain, lacks official-documentation support, or is derived from inference must be explicitly flagged with "this information may be uncertain" or a similar notice. At critical or complex points, advise the user to consult the official documentation or source code for authoritative guidance.
</expert_level_skill>

The open-source repositories involved:

- vLLM: a fast, easy-to-use, cost-effective LLM inference and serving framework. https://github.com/vllm-project/vllm
- vLLM Ascend: a community-maintained hardware plugin that runs vLLM efficiently on Ascend NPUs. https://github.com/vllm-project/vllm-ascend
- MindIE-LLM: an LLM inference engine for Ascend. https://gitcode.com/Ascend/MindIE-LLM
- MindIE-SD: an inference engine suite for Stable Diffusion-family models. https://gitcode.com/Ascend/MindIE-SD
- MindIE-Motor: a high-performance inference serving framework. https://gitcode.com/Ascend/MindIE-Motor
- MindIE-Turbo: an LLM inference acceleration plugin library. https://gitcode.com/Ascend/MindIE-Turbo
- msmodelslim: an LLM quantization and compression toolkit. https://gitcode.com/Ascend/msmodelslim

In Plan mode the Agent first thinks the task through and lays out a detailed plan. Switch to Build mode, type "continue creating this Skill", and the Agent generates the complete Skill files. When the run finishes, the new Skill is ready.

Finally, copy the generated Skill into OpenCode's skills directory to enable it:

```
cp -r /root/code-repos-expert ~/.config/opencode/skills/
```

### 2.4 Dissecting SKILL.md: what the Agent's "operating manual" looks like

With the Skill created, let's open the core file, SKILL.md, and look inside. The frontmatter (kept verbatim here, bilingual by design so that both Chinese and English queries match):

```
---
name: code-repos-expert
description: 昇腾(Ascend)推理生态开源代码仓库智能问答专家,旨在为 vLLM、vLLM-Ascend、MindIE-LLM、MindIE-SD、MindIE-Motor、MindIE-Turbo 以及 msModelSlim (MindStudio-ModelSlim) 等仓库提供专家级且易于理解的解释。在处理昇腾(Ascend)推理生态相关项目的用户询问时,务必触发此技能(Skill),可解答使用方法、部署流程、支持模型、支持特性、系统架构、配置管理、调试、测试、故障排查、性能优化、定制开发、源码解析以及其他技术问题。支持中英文双语回复,并可借助 deepwiki MCP 工具检索仓库知识库,生成具备上下文感知且基于证据的回答。Ascend inference ecosystem open-source code repository intelligent question-and-answer (Q&A) expert. Provide expert-level yet comprehensible explanations for repositories such as vLLM, vLLM-Ascend, MindIE-LLM, MindIE-SD, MindIE-Motor, MindIE-Turbo, and msModelSlim (MindStudio-ModelSlim). Use this skill when addressing user inquiries related to these Ascend inference ecosystem projects, including topics such as usage, deployment process, supported models, supported features, system architecture, configuration management, debugging, testing, troubleshooting, performance optimization, custom development, source code analysis, and any other technical issues about these projects. Support responses in both Chinese and English. Use deepwiki MCP tools to query repository knowledge bases and generate context-aware, evidence-based responses.
---
```

The body then follows as Markdown:

# Code Repositories Expert

Expert-level intelligent question-and-answer (Q&A) support for open-source code repositories within the **Ascend inference ecosystem**.
Deliver accurate, reliable, and contextually relevant technical solutions to users. Respond **in the same language as the user's input** (Chinese or English).

## Overall Workflow

### 1. Identify Intent

**Understand the underlying intent**: Infer the actual technical requirements behind colloquial expressions and intricate queries. Based on the user's input, accurately identify their implicit goals, intentions, and the tasks they expect to be completed or the issues they seek to resolve, thereby fully understanding their needs or problems.

| User Expression | Intent Category |
|---|---|
| "How to install?" / "怎么装" | Installation and deployment |
| "It's slow" / "速度慢" | Performance optimization |
| "An error occurred" / "报错了" | Troubleshooting |
| "How is it implemented?" / "怎么实现的" | Source code analysis |
| "What models are supported?" / "支持哪些模型" | Compatibility and features |
| "How to configure?" / "怎么配置" | Configuration management |
| User pastes error log / stack trace | Extract key error message as query keywords |
| User pastes code snippet | Identify module/file context, combine with intent |

For **troubleshooting** and **deployment** intents, proactively request:

- Hardware: Ascend chip model (e.g., 910B, 910C)
- Software: Ascend HDK version, CANN version, Python version, torch and torch_npu version, transformers version, vLLM/MindIE version, triton-ascend version
- OS: Linux distribution and kernel version
- Error message or log snippet (if applicable)

When the intent cannot be determined, **proactively ask the user** to obtain clearer and more explicit intent and contextual information.

### 2. Route to Code Repository

Match relevant keywords to the appropriate repository.
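(A brief editorial aside, not part of SKILL.md.) The intent table in step 1 is, in effect, a keyword-to-category lookup. A toy sketch of how such a lookup behaves, with hypothetical keyword lists, might look like this; the real Skill performs this reasoning in natural language rather than in code:

```python
# Illustrative sketch of the intent-identification table (hypothetical, not part of SKILL.md).
INTENT_KEYWORDS = {
    "Installation and deployment": ["install", "deploy", "怎么装"],
    "Performance optimization": ["slow", "速度慢", "latency"],
    "Troubleshooting": ["error", "报错", "traceback"],
    "Source code analysis": ["implemented", "怎么实现"],
    "Compatibility and features": ["supported", "支持哪些模型"],
    "Configuration management": ["configure", "怎么配置"],
}

def identify_intent(user_input: str) -> str:
    """Return the first intent category whose keywords appear in the input."""
    text = user_input.lower()
    for category, keywords in INTENT_KEYWORDS.items():
        if any(k.lower() in text for k in keywords):
            return category
    # Per SKILL.md: when the intent cannot be determined, ask the user.
    return "ask-user"

print(identify_intent("vllm-ascend 报错了"))  # Troubleshooting
```

The point of the sketch is the final branch: an unmatched input routes to a clarifying question rather than a guess.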
Refer to the **Repository Routing Table** below for the complete mapping.

**Repository Routing Table**:

| Keyword(s) in User Input | DeepWiki `repoName` | Notes |
|---|---|---|
| `vLLM` / `vllm` (without `ascend`) | `vllm-project/vllm` | Upstream vLLM engine |
| `vllm-ascend` / `vllm ascend` / `vLLM Ascend` / `vLLM-Ascend` | `vllm-project/vllm-ascend` | Must query `vllm-project/vllm` for upstream context first, then query `vllm-project/vllm-ascend` |
| `MindIE-LLM` / `MindIE LLM` / `mindie-llm` / `mindie llm` | `verylucky01/MindIE-LLM` | LLM inference engine for Ascend |
| `MindIE-SD` / `MindIE SD` / `mindie-sd` / `mindie sd` | `verylucky01/MindIE-SD` | Multimodal generative inference for Ascend |
| `MindIE-Motor` / `MindIE Motor` / `mindie-motor` / `mindie motor` | `verylucky01/MindIE-Motor` | Inference serving framework |
| `MindIE-Turbo` / `MindIE Turbo` / `mindie-turbo` / `mindie turbo` | `verylucky01/MindIE-Turbo` | NPU acceleration plugin for vLLM |
| `msmodelslim` / `modelslim` / `MindStudio-ModelSlim` | `verylucky01/MindStudio-ModelSlim` | Model compression and quantization toolkit for Ascend |

#### vllm-ascend Special Handling

`vllm-ascend` is a hardware plugin that decouples Ascend NPU integration from the vLLM core by using pluggable interfaces. **Recommended query strategy**:

1. Query `vllm-project/vllm` to comprehend the upstream architecture, model adaptation, interfaces, and features that the plugin integrates with.
2. Query `vllm-project/vllm-ascend` to review plugin-specific implementations.
3. Must query `vllm-project/vllm` for upstream context first, then query `vllm-project/vllm-ascend` when upstream interface details are needed to interpret plugin-level behavior. For example:
   - First: `mcp__deepwiki__ask_question(repoName="vllm-project/vllm", question="...")`
   - Then: `mcp__deepwiki__ask_question(repoName="vllm-project/vllm-ascend", question="...")`

**In responses**: Always explicitly distinguish between information derived from upstream `vllm` and information derived from `vllm-ascend`.

#### MindIE-Turbo Cross-Repo Handling

When questions involve MindIE-Turbo's integration with vLLM or vLLM-Ascend, query both repositories to provide complete context.

#### Disambiguation Protocol

- **Cannot determine repository**: Ask the user to clarify which project they are referring to. Never guess.
- **Ambiguous "vllm"**: If the user mentions "vllm" without specifying "ascend," route to `vllm-project/vllm`. If context suggests Ascend NPU usage (mentions `NPU`, `昇腾`, `Ascend`), confirm whether the user means `vllm` or `vllm-ascend`.
- **Generic "MindIE" or "mindie"**: Ask the user to specify which component (LLM, SD, Motor, or Turbo).
- **Generic "Ascend" / "昇腾" / "NPU"** (without a specific project): Ask the user which Ascend ecosystem project they are asking about.
- **Cross-repo comparison questions** (e.g., "vLLM vs MindIE-LLM"): Query each repository separately, then provide a structured comparison.
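(Another editorial aside.) Routing plus disambiguation can be pictured as a function that returns either a `repoName` or a clarifying question. A simplified sketch under the assumption of plain substring matching; the keyword lists and the `route` helper are illustrative, not the Skill's actual mechanism:

```python
# Simplified sketch of repository routing with disambiguation (hypothetical).
ROUTES = [
    (["vllm-ascend", "vllm ascend"], "vllm-project/vllm-ascend"),
    (["mindie-llm", "mindie llm"], "verylucky01/MindIE-LLM"),
    (["mindie-sd", "mindie sd"], "verylucky01/MindIE-SD"),
    (["mindie-motor", "mindie motor"], "verylucky01/MindIE-Motor"),
    (["mindie-turbo", "mindie turbo"], "verylucky01/MindIE-Turbo"),
    (["msmodelslim", "modelslim"], "verylucky01/MindStudio-ModelSlim"),
    (["vllm"], "vllm-project/vllm"),  # checked last: bare "vllm" means upstream
]

def route(user_input: str) -> str:
    text = user_input.lower()
    # Disambiguation: bare "MindIE" with no specific component triggers a question.
    if "mindie" in text and not any(k in text for kws, _ in ROUTES[1:5] for k in kws):
        return "ASK: which MindIE component do you mean (LLM, SD, Motor, or Turbo)?"
    for keywords, repo in ROUTES:
        if any(k in text for k in keywords):
            return repo
    # No repository matched: never guess, ask instead.
    return "ASK: which Ascend-ecosystem project are you asking about?"

print(route("How to deploy MindIE-SD?"))  # verylucky01/MindIE-SD
```

Note the ordering: the specific `vllm-ascend` patterns are tested before the bare `vllm` fallback, mirroring the "without `ascend`" condition in the routing table.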
### 3. Construct Optimized Queries

Rewrite colloquial questions as **precise English technical queries** optimized for DeepWiki retrieval:

- Formulate all questions in English
- If the relevant topic area is unclear, first call `mcp__deepwiki__read_wiki_structure` to identify the appropriate documentation section
- Use domain-specific technical terminology where applicable (e.g., KV Cache, Tensor Parallelism, Graph Mode, Mixture of Experts, Gated DeltaNet, Speculative Decoding, Multi-Token Prediction)
- Include relevant contextual details, such as module names, error messages, and configuration parameters
- Remove colloquial modifiers while preserving the core technical meaning
- For architecture-related questions, focus on specific components rather than requesting broad overviews
- Decompose broad questions into multiple focused sub-questions to further improve retrieval precision

**Examples by Intent Category**:

| Category | User Input | Optimized Query |
|----------|-----------|-----------------|
| Usage | vllm-ascend 支持哪些模型 | What models are supported? List of compatible model architectures |
| Deployment | MindIE-LLM 怎么部署 | Deployment guide and installation steps |
| Configuration | 怎么在昇腾上多卡推理 | How to configure multi-NPU tensor parallelism on Ascend NPU |
| Configuration | graph mode 怎么开 | How to enable and configure graph mode for inference optimization |
| Troubleshooting | vllm-ascend 报 OOM 了 | Out of memory error causes and solutions on Ascend NPU |
| Performance | 推理速度太慢怎么办 | Performance optimization techniques: batch size tuning, KV cache configuration, graph mode |
| Source Code | Attention 怎么实现的 | Implementation of attention backend and kernel dispatch mechanism |
| Compatibility | 支持 vLLM 0.8 吗 | Version compatibility matrix and supported vLLM versions |
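(Editorial aside.) One of the trickier rewrites is the "user pastes an error log" case from the intent table: "extract the key error message as query keywords". For Python-style tracebacks, that guideline can be sketched as pulling out the final `SomeError: message` line. This is an illustrative heuristic of mine, not code prescribed by the Skill:

```python
import re

def key_error_line(log: str) -> str:
    """Return the last line that looks like 'SomeError: message' from a pasted log."""
    matches = re.findall(r"^\s*(\w+(?:Error|Exception)\b.*)$", log, flags=re.MULTILINE)
    # Fall back to the last non-empty line when no exception line is found.
    return matches[-1].strip() if matches else log.strip().splitlines()[-1]

log = """Traceback (most recent call last):
  File "run.py", line 12, in <module>
    engine.generate(prompts)
RuntimeError: NPU out of memory
"""
print(key_error_line(log))  # RuntimeError: NPU out of memory
```

The extracted line then becomes the core of the optimized English query ("Out of memory error causes and solutions on Ascend NPU" in the table above).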
### 4. Query DeepWiki

#### DeepWiki Tool Usage Patterns

Use the mapped `repoName` and the refined queries derived from the user's identified intent.

##### Single-repo query

```
mcp__deepwiki__ask_question(repoName="<owner/repo>", question="...")
```

##### Explore repo structure first

```
mcp__deepwiki__read_wiki_structure(repoName="<owner/repo>")
```

##### Read full repo documentation

```
mcp__deepwiki__read_wiki_contents(repoName="<owner/repo>")
```

**Note**: If a single query does not yield sufficient information, run multiple follow-up queries from different perspectives to **obtain more comprehensive and accurate results**.

#### DeepWiki Tool Selection

| Scenario | Recommended Tool |
|----------|-----------------|
| Known question direction, need specific answer | `mcp__deepwiki__ask_question` |
| Unsure which documentation section covers the question | `mcp__deepwiki__read_wiki_structure` first, then `ask_question` |
| Need comprehensive coverage of a module/topic | `mcp__deepwiki__read_wiki_contents` |
| Single query returns insufficient information | Multiple `ask_question` calls from different angles |

#### Session Context Reuse

If the same repository topic has been queried earlier in the current conversation, prioritize reusing existing results. Only issue additional queries when new information is needed.

#### Fallback Strategy

- **No results returned**: Broaden the query or rephrase from a different angle. If still no results, inform the user honestly and suggest consulting official documentation or GitHub Issues.
- **Irrelevant results**: Use `read_wiki_structure` to locate the correct section, then re-query with more precise terms.
- **Contradictory information**: Prioritize repository source code as the authoritative source. Flag the contradiction and recommend the user verify independently.
- **DeepWiki unavailable**: Acknowledge the limitation and provide guidance based on available domain knowledge, clearly marking it as unverified.
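(Editorial aside.) The fallback strategy boils down to "query; if empty, rephrase and retry; then give up honestly". A sketch with a stand-in for the DeepWiki `ask_question` tool; the `fake_ask` stub and its canned answer are purely illustrative:

```python
def ask_with_fallback(ask, repo: str, queries: list[str]) -> str:
    """Try each query phrasing in turn; be honest if nothing comes back.

    `ask` stands in for the deepwiki ask_question tool: (repo, question) -> str.
    """
    for question in queries:
        answer = ask(repo, question)
        if answer:  # non-empty result: use it
            return answer
    # SKILL.md fallback: never fabricate; point the user at authoritative sources.
    return ("No relevant information was found in DeepWiki; "
            "please consult the official documentation or GitHub Issues.")

# Stubbed DeepWiki: only the broader phrasing yields an answer in this toy example.
def fake_ask(repo, question):
    return "Graph mode is enabled via config." if "graph mode" in question else ""

print(ask_with_fallback(fake_ask, "vllm-project/vllm-ascend",
                        ["How to enable ACL graph?", "How to enable graph mode?"]))
```

The honest-failure string at the end is the behavior the Skill's "No results returned" rule mandates.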
### 5. Organize and Synthesize the Response

Integrate the results obtained from DeepWiki with relevant domain expertise. Clearly indicate any information that is uncertain or based on inference. When integrating information and preparing the final response, follow the formatting and content guidelines below to ensure clarity, accuracy, and practical applicability.

#### 5a. Response Format

- **Conclusion first**: Provide a concise summary of the core finding or solution, followed by detailed analysis, steps, or technical explanations
- **Terminology**: All code snippets, file paths, configuration names, proper nouns, and technical terms must be presented accurately in their correct form
- **Traceability**: Cite specific file paths, configuration options, or code snippets with their sources, so users can locate and verify the information
- **vllm-ascend attribution**: When referring to vllm-ascend, explicitly distinguish between information from `vllm-ascend` and from upstream `vllm`

#### 5b. Quality Requirements

- **Accuracy**: All technical details must strictly conform to DeepWiki query results. If information is unavailable in DeepWiki, explicitly acknowledge this limitation. Never fabricate content.
- **Completeness**: Cover all aspects of the user's question. Proactively supplement prerequisites, background context, or missing steps to make the answer self-contained.
- **Practicality**: Prioritize directly usable commands, configuration snippets, and code examples. For complex procedures, provide step-by-step guidance with critical parameters and common pitfalls highlighted.
- **Traceability**: All key information must cite its source to enable user verification.
- **Clarity**: Use clear and accessible language. Avoid unnecessary jargon.
Focus on technical accuracy while remaining approachable.

## Prohibited Behaviors

- Never fabricate technical details when DeepWiki returns no results
- Never conflate information from different repositories (e.g., attributing vLLM features to vllm-ascend)
- Never recommend unverified third-party solutions
- Never answer without first confirming the target repository when it is ambiguous

## Uncertainty Marking

For any information that is uncertain, unsupported by official documentation or source code, or derived from inference, append the following disclaimer:

- Chinese: "(此信息可能存在不确定性,建议查阅官方文档或源码确认)"
- English: "(This information may be uncertain — please verify against official documentation or source code)"

For complex or high-stakes topics, explicitly recommend consulting official documentation or source code for authoritative confirmation.

## Scope Boundary

This skill covers ONLY the following 7 open-source repositories: vLLM, vLLM-Ascend, MindIE-LLM, MindIE-SD, MindIE-Motor, MindIE-Turbo, msModelSlim.

If the user's question falls outside this scope:

- Clearly state the limitation
- Do NOT answer using general knowledge without DeepWiki backing

The full SKILL.md is available in the GitHub repository (https://github.com/Agent-Skill-007/learn-agent-skills); here we highlight the most important design points.

In practice, the first version generated by skill-creator, while structurally complete, understood the specific repositories imperfectly; it had to be refined using the practical guide "The Complete Guide to Building Skills for Claude" together with domain knowledge. It is advisable to put the Skill under Git version control first and iterate before deploying.

The final code-repos-expert Skill follows the workflow "intent understanding → repository routing → intelligent retrieval → synthesized answer", like an experienced engineer: first figure out what you are really asking, then consult the relevant documentation and code, and finally give an evidence-backed answer. Four key design points:

**Design point 1: the intent table, translating plain speech into technical questions.** User questions are often colloquial ("it's too slow", "it errored out"), but to query the repositories accurately the Agent must turn them into precise technical questions. SKILL.md contains a built-in mapping table for this.

**Design point 2: the routing table, finding the right repository from keywords.** When users mention different project names, the Agent must know which repository to search. SKILL.md contains a routing table mapping keywords to repositories. In particular, vllm-ascend is a hardware plugin for vLLM, and much of its functionality depends on upstream vLLM. For vllm-ascend questions the AI therefore uses a "dual-repository" strategy: first query upstream vllm for the overall architecture, then query vllm-ascend for the Ascend-specific implementation details, clearly separating the two sources in the answer.

**Design point 3: disambiguation, asking when unsure instead of guessing.** When the question is ambiguous (say, just "MindIE" or "vLLM"), the Agent does not guess; it asks. Say only "MindIE"?
→ The Agent asks: "Do you mean MindIE-LLM, MindIE-SD, MindIE-Motor, or MindIE-Turbo?" Mention "vllm" in a context involving Ascend/NPU? → The Agent confirms: "Are you asking about vllm or vllm-ascend?"

**Design point 4: anti-hallucination, saying "I don't know" when it doesn't.** SKILL.md sets strict baselines:

- When nothing is found, the Agent does not fabricate; it honestly says "no relevant information was found in DeepWiki; please consult the official documentation".
- Uncertain information must carry a marker such as: (此信息可能存在不确定性,建议查阅官方文档或源码确认)
- Information from different repositories must never be conflated.

## The Agent Skill is ready: test it on real questions

Time to play: let's check the Skill with real questions.

Use case 1: a hardcore source-level question. Input:

```
/code-repos-expert How exactly does vllm-ascend work with vllm to adapt Qwen3-Next? You must dig into the key model patches and operator-adaptation patches, with particular attention to the specifics of patch_triton
```

Look at the result: the AI not only answers the question, it automatically distinguishes which information comes from upstream vLLM and which from the vllm-ascend plugin. That is the "dual-repository" strategy and source attribution we built into the Skill at work.

Use case 2: mapping cross-repository relationships. Input:

```
/code-repos-expert What is the relationship among MindIE-LLM, MindIE-SD, MindIE-Motor, and MindIE-Turbo?
```

Cross-repository questions like this are exactly what DeepWiki alone struggles with, because it generates wiki pages and Q&A for one repository at a time. The Agent Skill built in this article, via the routing table and coordinated multi-repository queries, stitches together information scattered across repositories into a clear, panoramic answer.

## The final step: install your skill pack into the lobster

The Agent Skills examples so far ran in OpenCode, to make the basic structure and invocation easy to grasp. In early 2026 the open-source project OpenClaw gained 250,000 GitHub stars in just two months, the fastest growth on record. It too is, at heart, a command-line (CLI) agent, but through a UI, instant-messaging (IM) integration, round-the-clock proactive interaction, and the momentum of the open-source ecosystem, it brought developer-oriented AI agents to a much wider audience. Below is a quick look at using an Agent Skill in OpenClaw.

This tutorial uses OpenClaw deployed on a cloud server; for local deployment see Datawhale's installation tutorial "OpenClaw 免费小白安装教程来了!养成你的第一个龙虾" (the workflow is essentially the same).

Run OpenClaw: interact with OpenClaw 2026.3.2 directly in the cloud server's terminal, as shown below.

Install the Agent Skill. Input:

```
Install this Agent Skill: https://github.com/Agent-Skill-007/learn-agent-skills/tree/main/skills/code-repos-expert
```

Use case 1: the hardcore source-level question again. Input:

```
Use the code-repos-expert skill to analyze how exactly vllm-ascend works with vllm to adapt Qwen3-Next. You must dig into the key model patches and operator-adaptation patches, with particular attention to the specifics of patch_triton
```

After invoking the Agent Skill, OpenClaw automatically locates the relevant core modules and shows key source snippets, helping developers understand the implementation in depth. The analysis covers not only the main vLLM framework but also the vllm-ascend adaptation layer, producing a well-structured, detailed technical report that speeds up comprehension of complex repositories.

Use case 2: the cross-repository question again. Input:

```
Use the code-repos-expert skill to analyze the relationship among MindIE-LLM, MindIE-SD, MindIE-Motor, and MindIE-Turbo
```

As in OpenCode, the Skill's routing table and multi-repository coordination let it answer a question that DeepWiki alone, tied to single repositories, cannot.

## Pitfalls, takeaways, and building your own Agent Skill

Core lessons from building Skills:

**1. Make it work first, then encapsulate.** Don't start by writing the Skill. The right flow: iterate on your questions and prompts in an OpenCode conversation until you find what works and solves the problem, then freeze that into a Skill. Like a paper: run the experiments first, write the methodology after.
**2. Brevity is power.** A model's context window is finite (like the limited surface of your workbench), and the Skill shares that space with the system prompt and conversation history. A good Skill contains only two kinds of content: domain information the model does not already know, plus the rules you want it to follow.

**3. Polish it like a paper.** The first version will almost certainly be imperfect. After a few uses you will notice where the AI misunderstands; just go back and edit SKILL.md. You can even ask the AI to analyze SKILL.md itself and suggest refinements, much like having a paper proofread.

**4. A writing formula for SKILL.md.**

- Metadata block: state clearly what it does and when to use it.
- Body: treat it as an operating manual for the Agent; use imperative sentences and spell out each step.
- Structured workflow: break complex tasks into clear steps; avoid long abstract descriptions.

The core insight: an Agent Skill distills expert knowledge and workflows into a module the Agent can execute reliably. The hard part is not "how to write code" but "how to express the process and standards clearly and unambiguously", always oriented toward solving the problem or completing the task.

### You can build one too

This approach is not limited to the Ascend inference repositories:

- Learning machine learning / deep learning? Build a scikit-learn / PyTorch source-code Q&A Skill.
- Researching large language models? Turn the relevant repositories (such as transformers and llama.cpp) into your open-source "mentors".
- Working on parameter-efficient fine-tuning? Do the same with repositories like PEFT and LlamaFactory.

The core recipe is the same: pick the repositories → connect DeepWiki → generate the Skill with skill-creator → debug and refine.

You are welcome to try this Agent Skill and customize it for the open-source repositories you care about or are studying. The Skill built in this article, together with a curated collection of Skills platforms and Skills repositories, lives at https://github.com/Agent-Skill-007/learn-agent-skills (stars welcome!). If you have any feedback or suggestions, or want to learn Agent Skills together, join the Datawhale Agent Skills discussion group. I have also created an "Agent Skills Guide" knowledge base on the ima.copilot platform; search for it under "Discover" to use it.
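To make the recipe concrete, here is a small sketch that scaffolds a minimal skill folder with the frontmatter-plus-body layout described in this article. The skill name "pytorch-expert" and its description are hypothetical examples; a real skill's body would of course carry far more detail:

```python
import tempfile
from pathlib import Path

def scaffold_skill(root: Path, name: str, description: str) -> Path:
    """Create <root>/<name>/SKILL.md with minimal frontmatter and a workflow stub."""
    skill_dir = root / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    skill_md = skill_dir / "SKILL.md"
    skill_md.write_text(
        f"---\nname: {name}\ndescription: {description}\n---\n\n"
        f"# {name}\n\n"
        "1. Identify the user's intent.\n"
        "2. Route to the right repository.\n"
        "3. Query DeepWiki and synthesize an evidence-based answer.\n",
        encoding="utf-8",
    )
    return skill_md

# Hypothetical example skill; write it to a temporary directory for demonstration.
path = scaffold_skill(Path(tempfile.mkdtemp()), "pytorch-expert",
                      "Q&A expert for the pytorch/pytorch repository")
print(path.read_text(encoding="utf-8").splitlines()[1])  # name: pytorch-expert
```

In real use you would point `root` at `~/.config/opencode/skills/` (OpenCode) or `~/.openclaw/skills/` (OpenClaw), and let skill-creator fill in the body.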