The Accidental Orchestrator

This is the first article in a series on agentic engineering and AI-driven development. Look for the next article on March 19 on O'Reilly Radar.

There's been a lot of hype about AI and software development, and it comes in two flavors. One says, "We're all doomed; tools like Claude Code will make software engineering obsolete within a year." The other says, "Don't worry, everything's fine; AI is just another tool in the toolbox." Neither is honest.

I've spent over 20 years writing about software development for practitioners, covering everything from coding and architecture to project management and team dynamics. For the last two years I've been focused on AI, training developers to use these tools effectively and writing about what works and what doesn't in books, articles, and reports. And I kept running into the same problem: I had yet to find anyone with a coherent answer for how experienced developers should actually work with these tools. There are plenty of tips and plenty of hype, but very little structure, and very little you could practice, teach, critique, or improve.

I'd been observing developers at work using AI with various levels of success, and I realized we need to start thinking about this as its own discipline. Andrej Karpathy, the former head of AI at Tesla and a founding member of OpenAI, recently proposed the term "agentic engineering" for disciplined development with AI agents, and others like Addy Osmani are getting on board. Osmani's framing is that AI agents handle implementation but the human owns the architecture, reviews every diff, and tests relentlessly. I think that's right.

But I've spent a lot of the last two years teaching developers how to use tools like Claude Code, agent mode in Copilot, Cursor, and others, and what I keep hearing is that they already know they should be reviewing the AI's output, maintaining the architecture, writing tests, keeping documentation current, and staying in control of the codebase.
They know how to do it in theory. But they get stuck trying to apply it in practice: How do you actually review thousands of lines of AI-generated code? How do you keep the architecture coherent when you're working across multiple AI tools over weeks? How do you know when the AI is confidently wrong? And it's not just junior developers who are having trouble with agentic engineering. I've talked to senior engineers who struggle with the shift to agentic tools, and intermediate developers who take to it naturally. The difference isn't necessarily the years of experience; it's whether they've figured out an effective and structured way to work with AI coding tools. That gap between knowing what developers should be doing with agentic engineering and knowing how to integrate it into their day-to-day work is a real source of anxiety for a lot of engineers right now. That's the gap this series is trying to fill.

Despite what much of the hype about agentic engineering is telling you, this kind of development doesn't eliminate the need for developer expertise; just the opposite. Working effectively with AI agents actually raises the bar for what developers need to know. I wrote about that experience gap in an earlier O'Reilly Radar piece called "The Cognitive Shortcut Paradox." The developers who get the most from working with AI coding tools are the ones who already know what good software looks like, and can often tell whether the AI wrote it.

The idea that AI tools work best when experienced developers are driving them matched everything I'd observed. It rang true, and I wanted to prove it in a way that other developers would understand: by building software. So I started building a specific, practical approach to agentic engineering for developers to follow, and then I put it to the test. I used it to build a production system from scratch, with the rule that AI would write all the code.
I needed a project that was complex enough to stress-test the approach and interesting enough to keep me engaged through the hard parts. I wanted to apply everything I'd learned and discover what I still didn't know. That's when I came back to Monte Carlo simulations.

The experiment

I've been obsessed with Monte Carlo simulations ever since I was a kid. My dad's an epidemiologist—his whole career has been about finding patterns in messy population data, which means statistics was always part of our lives (and it also means that I learned SPSS at a very early age). When I was maybe 11 he told me about the drunken sailor problem: A sailor leaves a bar on a pier, taking a random step toward the water or toward his ship each time. Does he fall in or make it home? You can't know from any single run. But run the simulation a thousand times, and the pattern emerges from the noise. The individual outcome is random; the aggregate is predictable.

I remember writing that simulation in BASIC on my TRS-80 Color Computer 2: a little blocky sailor stumbling across the screen, two steps forward, one step back. The drunken sailor is the "Hello, world" of Monte Carlo simulations. Monte Carlo is a technique for problems you can't solve analytically: You simulate them hundreds or thousands of times and measure the aggregate results. Each individual run is random, but the statistics converge on the true answer as the sample size grows. It's one way we model everything from nuclear physics to financial risk to the spread of disease across populations.

What if you could run that kind of simulation today by describing it in plain English? Not a toy demo but thousands of iterations with seeded randomness for reproducibility, where the outputs get validated and the results get aggregated into actual statistics you can use.
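For readers who have never written one, the whole experiment fits in a few lines of Python. This is an illustrative sketch, not code from Octobatch; the function names and pier geometry are my own invention:

```python
import random

def drunken_sailor(rng, pier_length=10, start=5, max_steps=10_000):
    """One random walk: the sailor falls in the 'water' at position 0
    or reaches his 'ship' at the far end of the pier."""
    pos = start
    for _ in range(max_steps):
        pos += rng.choice((-1, 1))  # one random step either way
        if pos <= 0:
            return "water"
        if pos >= pier_length:
            return "ship"
    return "stuck"  # effectively never happens at these sizes

def simulate(runs=10_000, seed=42):
    # One persistent RNG for the whole experiment. (Re-seeding every
    # iteration is exactly the kind of bug discussed later in this article.)
    rng = random.Random(seed)
    outcomes = [drunken_sailor(rng) for _ in range(runs)]
    return outcomes.count("water") / runs

print(f"fraction in the water: {simulate():.3f}")
```

Starting mid-pier, a fair walk should converge on roughly half the sailors in the water; any single run tells you nothing, but the aggregate is predictable.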
Or a pipeline where an LLM generates content, a second LLM scores it, and anything that doesn't pass gets sent back for another try.

The goal of my experiment was to build that system, which I called Octobatch. Right now the industry is constantly looking for new real-world, end-to-end case studies in agentic engineering, and I wanted Octobatch to be exactly that case study.

I took everything I'd learned from teaching and observing developers working with AI, put it to the test by building a real system from scratch, and turned the lessons into a structured approach to agentic engineering I'm calling AI-driven development, or AIDD. This is the first article in a series about what agentic engineering looks like in practice, what it demands from the developer, and how you can apply it to your own work.

The result is a fully functioning, well-tested application that consists of about 21,000 lines of Python across several dozen files, backed by complete specifications, nearly a thousand automated tests, and quality integration and regression test suites. I used Claude Cowork to review all the AI chats from the entire project, and it turns out that I built the entire application in roughly 75 hours of active development time over seven weeks. For comparison, I built Octobatch in just over half the time I spent last year playing Blue Prince.

But this series isn't just about Octobatch. I integrated AI tools at every level: Claude and Gemini collaborating on architecture, Claude Code writing the implementation, LLMs generating the pipelines that run on the system they helped build. This series is about what I learned from that process: the patterns that worked, the failures that taught me the most, and the orchestration mindset that ties it all together.
Each article pulls a different lesson from the experiment, from validation architecture to multi-LLM coordination to the values that kept the project on track.

Agentic engineering and AI-driven development

When most people talk about using AI to write code, they mean one of two things: AI coding assistants like GitHub Copilot, Cursor, or Windsurf, which have evolved well beyond autocomplete into agentic tools that can run multifile editing sessions and define custom agents; or "vibe coding," where you describe what you want in natural language and accept whatever comes back. These coding assistants are genuinely impressive, and vibe coding can be really productive.

Using these tools effectively on a real project, however, while maintaining architectural coherence across thousands of lines of AI-generated code, is a different problem entirely. AIDD aims to help solve that problem. It's a structured approach to agentic engineering where AI tools drive substantial portions of the implementation, architecture, and even project management, while you, the human in the loop, decide what gets built and whether it's any good. By "structure," I mean a set of practices developers can learn and follow, a way to know whether the AI's output is actually good, and a way to stay on track across the life of a project. If agentic engineering is the discipline, AIDD is one way to practice it.

In AI-driven development, developers don't just accept suggestions or hope the output is correct. They assign specific roles to specific tools: one LLM for architecture planning, another for code execution, a coding agent for implementation, and the human for vision, verification, and the decisions that require understanding the whole system.

And the "driven" part is literal. The AI is writing almost all of the code. One of my ground rules for the Octobatch experiment was that I would let AI write all of it.
I have high code quality standards, and part of the experiment was seeing whether AIDD could produce a system that meets them. The human decides what gets built, evaluates whether it's right, and maintains the constraints that keep the system coherent.

Not everyone agrees on how much the developer needs to stay in the loop, and the fully autonomous end of the spectrum is already producing cautionary tales. Nicholas Carlini at Anthropic recently tasked 16 Claude instances with building a C compiler in parallel with no human in the loop. After 2,000 sessions and $20,000 in API costs, the agents produced a 100,000-line compiler that can build a Linux kernel but isn't a drop-in replacement for anything, and when all 16 agents got stuck on the same bug, Carlini had to step back in and partition the work himself. Even strong advocates of a completely hands-off, vibe-driven approach to agentic engineering might call that a step too far. The question is how much human judgment you need to make that code trustworthy, and what specific practices help you apply that judgment effectively.

The orchestration mindset

If you want to get developers thinking about agentic engineering in the right way, you have to start with how they think about working with AI, not just what tools they use. That's where I started when I began building a structured approach, and it's why I started with habits. I developed a framework for these called the Sens-AI Framework, published as both an O'Reilly report (Critical Thinking Habits for Coding with AI) and a Radar series. It's built around five practices: providing context, doing research before prompting, framing problems precisely, iterating deliberately on outputs, and applying critical thinking to everything the AI produces. I started there because habits are how you lock in the way you think about how you're working. Without them, AI-driven development produces plausible-looking code that falls apart under scrutiny.
With them, it produces systems that a single developer couldn't build alone in the same time frame.

Habits are the foundation, but they're not the whole picture. AIDD also has practices (concrete techniques like multi-LLM coordination, context file management, and using one model to validate another's output) and values (the principles behind those practices). If you've worked with Agile methodologies like Scrum or XP, that structure should be pretty familiar: Practices tell you how to work day-to-day, and habits are the reflexes you develop so that the practices become automatic.

Values often seem weirdly theoretical, but they're an important piece of the puzzle because they guide your decisions when the practices don't give you a clear answer. There's an emerging culture around agentic engineering right now, and the values you bring to your project either match or clash with that culture. Understanding where the values come from is what makes the practices stick. All of that leads to a whole new mindset, what I'm calling the orchestration mindset. This series builds all four layers, using Octobatch as the proving ground.

Octobatch was a deliberate experiment in AIDD. I designed the project as a test case for the entire approach, to see what a disciplined AI-driven workflow could produce and where it would break down, and I used it to apply and improve the practices and values to make them effective and easy to adopt. And whether by instinct or coincidence, I picked the perfect project for this experiment. Octobatch is a batch orchestrator. It coordinates asynchronous jobs, manages state across failures, tracks dependencies between pipeline steps, and makes sure validated results come out the other end. That kind of system is fun to design, but a lot of the details, like state machines, retry logic, crash recovery, and cost accounting, can be tedious to implement.
It's exactly the kind of work where AIDD should shine, because the patterns are well understood but the implementation is repetitive and error-prone.

Orchestration—the work of coordinating multiple independent processes toward a coherent outcome—evolved into a core idea behind AIDD. I found myself orchestrating LLMs the same way Octobatch orchestrates batch jobs: assigning roles, managing handoffs, validating outputs, recovering from failures. The system I was building and the process I was using to build it followed the same pattern. I didn't anticipate it when I started, but building a system that orchestrates AI turns out to be a pretty good way to learn how to orchestrate AI. That's the accidental part of the accidental orchestrator. That parallel runs through every article in this series.

The path to batch

I didn't begin the Octobatch project by starting with a full end-to-end Monte Carlo simulation. I started where most people start: typing prompts into a chat interface. I was experimenting with different simulation and generation ideas to give the project some structure, and a few of them stuck. A blackjack strategy comparison turned out to be a great test case for a multistep Monte Carlo simulation. NPC dialogue generation for a role-playing game gave me a creative workload with subjective quality to measure. Both had the same shape: a set of structured inputs, each processed the same way. So I had Claude write a simple script to automate what I'd been doing by hand, and I used Gemini to double-check the work, make sure Claude really understood my ask, and fix hallucinations. It worked fine at small scale, but once I started running more than a hundred or so units, I kept hitting rate limits, the caps that providers put on how many API requests you can make per minute.

That's what pushed me to LLM batch APIs.
Instead of sending individual prompts one at a time and waiting for each response, the major LLM providers all offer batch APIs that let you submit a file containing all of your requests at once. The provider processes them on its own schedule; you wait for results instead of getting them immediately, but you don't have to worry about rate caps. I was happy to discover they also cost 50% less, and that's when I started tracking token usage and costs in earnest. But the real surprise was that batch APIs performed better than real-time APIs at scale. Once pipelines got past the 100- or 200-unit mark, batch started running significantly faster than real time. The provider processes the whole batch in parallel on its infrastructure, so you're not bottlenecked by round-trip latency or rate caps anymore.

The switch to batch APIs changed how I thought about the whole problem of coordinating LLM API calls at scale, and led to the idea of configurable pipelines. I could chain stages together: The output of one step could become the input to the next, and I could kick off the whole pipeline and come back to finished results. It turns out I wasn't the only one making the shift to batch APIs. Between April 2024 and July 2025, OpenAI, Anthropic, and Google all launched batch APIs, converging on the same pricing model: 50% of the real-time rate in exchange for asynchronous processing.

You probably didn't notice that all three major AI providers released batch APIs. The industry conversation was dominated by agents, tool use, MCP, and real-time reasoning. Batch APIs shipped with relatively little fanfare, but they represent a genuine shift in how we can use LLMs. Instead of treating them as conversational partners or one-shot SaaS APIs, we can treat them as processing infrastructure, closer to a MapReduce job than a chatbot. You give them structured data and a prompt template, and they process all of it and hand back the results.
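Mechanically, a batch submission is just a file of requests. The sketch below writes a JSONL input file in the shape OpenAI documents for its Batch API (one JSON request per line, each tagged with a `custom_id` so results can be matched back); Anthropic's and Google's batch endpoints follow the same file-of-requests idea with different field names. The unit data is invented for illustration, and the exact schema is an assumption to check against your provider's current docs:

```python
import json

# Hypothetical pipeline inputs: one prompt per simulation unit.
units = [{"id": f"unit-{i}", "prompt": f"Play blackjack hand {i}"} for i in range(3)]

with open("batch_input.jsonl", "w") as f:
    for unit in units:
        request = {
            "custom_id": unit["id"],  # ties each result back to its unit
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": "gpt-4o-mini",
                "messages": [{"role": "user", "content": unit["prompt"]}],
            },
        }
        f.write(json.dumps(request) + "\n")
```

From here the workflow is upload the file, create the batch job, and poll for completion; results come back asynchronously, keyed by `custom_id`, at half the real-time price.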
What matters is that you can now run tens of thousands of these transformations reliably, at scale, without managing rate limits or connection failures.

Why orchestration?

If batch APIs are so useful, why can't you just write a for-loop that submits requests and collects results? You can, and for simple cases a quick script with a for-loop works fine. But once you start running larger workloads, the problems start to pile up. Solving those problems turned out to be one of the most important lessons for developing a structured approach to agentic engineering.

First, batch jobs are asynchronous. You submit a job, and results come back hours later, so your script needs to track what was submitted and poll for completion. If your script crashes in the middle, you lose that state. Second, batch jobs can partially fail. Maybe 97% of your requests succeeded and 3% didn't. Your code needs to figure out which 3% failed, extract them, and resubmit just those items. Third, if you're building a multistage pipeline where the output of one step feeds into the next, you need to track dependencies between stages. And fourth, you need cost accounting. When you're running tens of thousands of requests, you want to know how much you spent, and ideally how much you're going to spend when you first start the batch.

Every one of these has a direct parallel to what you're doing in agentic engineering: keeping track of the work multiple AI agents are doing at once, dealing with code failures and bugs, making sure the entire project stays coherent when AI coding tools are only looking at the one part currently in context, and stepping back to look at the wider project management picture. All of these problems are solvable, but they're not problems you want to solve over and over, whether you're orchestrating LLM batch jobs or orchestrating AI coding tools. Solving these problems in the code taught me some interesting lessons about the overall approach to agentic engineering.
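The first two problems, lost state and partial failure, are worth seeing in miniature. This is a hypothetical sketch in the spirit of a manifest-based design, not Octobatch's actual code: state is persisted to disk after every change so a crash loses nothing, and failed units are extracted for resubmission instead of rerunning the whole batch:

```python
import json
from pathlib import Path

MANIFEST = Path("manifest.json")  # hypothetical filename for the persisted state

def load_manifest() -> dict:
    """State survives a crash because it lives on disk, not in memory."""
    if MANIFEST.exists():
        return json.loads(MANIFEST.read_text())
    # unit_id -> "pending" | "submitted" | "done" | "failed"
    return {"units": {f"unit-{i}": "pending" for i in range(4)}}

def save_manifest(manifest: dict) -> None:
    MANIFEST.write_text(json.dumps(manifest, indent=2))

def record_results(manifest: dict, results: dict) -> list:
    """Apply a batch's per-unit results and return the units to resubmit.

    `results` maps unit_id -> True (succeeded) or False (failed); a real
    provider response is richer, but the recovery logic is the same.
    """
    retry = []
    for unit_id, ok in results.items():
        manifest["units"][unit_id] = "done" if ok else "failed"
        if not ok:
            retry.append(unit_id)
    save_manifest(manifest)  # persist before doing anything else
    return retry

manifest = load_manifest()
# Suppose the batch comes back 75% complete: 3 of 4 units succeeded.
to_resubmit = record_results(
    manifest, {"unit-0": True, "unit-1": False, "unit-2": True, "unit-3": True}
)
print(to_resubmit)  # → ['unit-1']
```

Only the failed unit goes back out in the next submission; the successes stay settled on disk.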
Batch processing moves the complexity from connection management to state management. Real-time APIs are hard because of rate limits and retries. Batch APIs are hard because you have to track what's in flight, what succeeded, what failed, and what's next.

Before I started development, I went looking for existing tools that handled this combination of problems, because I didn't want to waste my time reinventing the wheel. I didn't find anything that did the job I needed. Workflow orchestrators like Apache Airflow and Dagster manage DAGs and task dependencies, but they assume tasks are deterministic and don't provide LLM-specific features like prompt template rendering, schema-based output validation, or retry logic triggered by semantic quality checks. LLM frameworks like LangChain and LlamaIndex are designed around real-time inference chains and agent loops—they don't manage asynchronous batch job lifecycles, persist state across process crashes, or handle partial failure recovery at the chunk level. And the batch API client libraries from the providers themselves handle submission and retrieval for a single batch, but not multistage pipelines, cross-step validation, or provider-agnostic execution.

Nothing I found covered the full lifecycle of multiphase LLM batch workflows, from submission and polling through validation, retry, cost tracking, and crash recovery, across all three major AI providers. That's what I built.

Lessons from the experiment

The goal of this article, as the first one in my series on agentic engineering and AI-driven development, is to lay out the hypothesis and structure of the Octobatch experiment. The rest of the series goes deep on the lessons I learned from it: the validation architecture, multi-LLM coordination, the practices and values that emerged from the work, and the orchestration mindset that ties it all together.
A few early lessons stand out, because they illustrate what AIDD looks like in practice and why developer experience matters more than ever.

You have to run things and check the data. Remember the drunken sailor, the "Hello, world" of Monte Carlo simulations? At one point I noticed that when I ran the simulation through Octobatch, 77.5% of the sailors fell in the water. The results for a random walk should be 50/50, so clearly something was badly wrong. It turned out the random number generator was being reseeded at every iteration with sequential seed values, which created correlation bias between runs. I didn't identify the problem immediately. I ran a bunch of tests using Claude Code as a test runner to generate each test, run it, and log the results; Gemini looked at the results and found the root cause. Claude had trouble coming up with a fix that worked well, and proposed a workaround with a large list of preseeded random number values in the pipeline. Gemini, reviewing my conversations with Claude, proposed a hash-based fix, but it seemed overly complex. Once I understood the problem and rejected their proposed solutions, I decided the best fix was simpler than either of the AIs' suggestions: a persistent RNG per simulation unit that advanced naturally through its sequence. I needed to understand both the statistics and the code to evaluate those three options. Plausible-looking output and correct output aren't the same thing, and you need enough expertise to tell the difference. (We'll talk more about this situation in the next article in the series.)

LLMs often overestimate complexity. At one point I wanted to add support for custom mathematical expressions in the analysis pipeline. Both Claude and Gemini pushed back, telling me, "This is scope creep for v1.0" and "Save it for v1.1." Claude estimated three hours to implement.
Because I knew the codebase, I knew we were already using asteval, a Python library that provides a safe, minimalistic evaluator for mathematical expressions and simple Python statements, to evaluate expressions elsewhere, so this seemed like a straightforward reuse of an existing dependency. Both LLMs thought the solution would be far more complex and time-consuming than it actually was; it took just two prompts to Claude Code (generated by Claude), and about five minutes total to implement. The feature shipped and made the tool significantly more powerful. The AIs were being conservative because they didn't have my context about the system's architecture. Experience told me the integration would be trivial. Without that experience, I would have listened to them and deferred a feature that took five minutes.

AI is often biased toward adding code, not deleting it. Generative AI is, unsurprisingly, biased toward generation. So when I asked the LLMs to fix problems, their first response was often to add more code: another layer, another special case. I can't think of a single time in the whole project when one of the AIs stepped back and said, "Tear this out and rethink the approach." The most productive sessions were the ones where I overrode that instinct and pushed for simplicity. This is something experienced developers learn over a career: The most successful changes often delete more than they add—the PRs we brag about are the ones that delete thousands of lines of code.

The architecture emerged from failure. The AI tools and I didn't design Octobatch's core architecture up front. Our first attempt was a Python script with in-memory state and a lot of hope. It worked for small batches but fell apart at scale: A network hiccup meant restarting from scratch; a malformed response required manual triage. A lot of things fell into place after I added the constraint that the system must survive being killed at any moment.
That single requirement led to the tick model (wake up, check state, do work, persist, exit), the manifest file as source of truth, and the entire crash-recovery architecture. We discovered the design by repeatedly failing to do something simpler.

Your development history is a dataset. I just told you several stories from the Octobatch project, and this series will be full of them. Every one of those stories came from going back through the chat logs between me, Claude, and Gemini. With AIDD, you have a complete transcript of every architectural decision, every wrong turn, every moment where you overruled the AI, and every moment where it corrected you. Very few development teams have ever had that level of fidelity in their project history. Mining those logs for lessons learned turns out to be one of the most valuable practices I've found.

Near the end of the project, I switched to Cursor to make sure none of this was specific to Claude Code. I created fresh conversations using the same context files I'd been maintaining throughout development, and I was able to bootstrap productive sessions immediately; the context files worked exactly as designed. The practices I'd developed transferred cleanly to a different tool. The value of this approach comes from the habits, the context management, and the engineering judgment you bring to the conversation, not from any particular vendor.

These tools are moving the world in a direction that favors developers who understand the ways engineering can go wrong and know solid design and architecture patterns…and who are okay letting go of control of every line of code.

What's next

Agentic engineering needs structure, and structure needs a concrete example to make it real. The next article in this series goes into Octobatch itself, because the way it orchestrates AI is a remarkably close parallel to what AIDD asks developers to do.
Octobatch assigns roles to different processing steps, manages handoffs between them, validates their outputs, and recovers when they fail. That's the same pattern I followed when building it: assigning roles to Claude and Gemini, managing handoffs between them, validating their outputs, and recovering when they went down the wrong path. Understanding how the system works turns out to be a good way to understand how to orchestrate AI-driven development. I'll walk through the architecture, show what a real pipeline looks like from prompt to results, present the data from a 300-hand blackjack Monte Carlo simulation that puts all of these ideas to the test, and use all of that to demonstrate ideas we can apply directly to agentic engineering and AI-driven development.

Later articles go deeper into the practices and ideas I learned from this experiment that make AI-driven development work: how I coordinated multiple AI models without losing control of the architecture, what happened when I tested the code against what I actually intended to build, and what I learned about the gap between code that runs and code that does what you meant. Along the way, the experiment produced some findings about how different AI models see code that I didn't expect—and that turned out to matter more than I thought they would.

Source: O'Reilly Media | Published: 2026-03-05
Scantech Releases SIMSCAN

On March 6, 2026, Nanjixiong learned that Scantech, a leading company in 3D scanning, recently released the SIMSCAN-S Gen 2, a high-precision palm-sized 3D scanner. It is a wireless, portable scanner that supports flexible switching between multiple modes and strengthens sphericity and flatness constraints, delivering higher form accuracy and joint control of both dimensions and form, and moving industrial scanning into the "era of form."

From dimensions to form: industrial scanning enters the "era of form"
Built on coordinated upgrades to optics and algorithms, the SIMSCAN-S Gen 2 strengthens dual control of sphericity and flatness, defining a new accuracy-control standard for handheld 3D scanners through comprehensive geometric control. The full series ships with internationally certified measurement reports, providing authoritative backing for data reliability.

A small base with big capability
A new detachable battery compartment and ergonomic design make the base more compact and operation steadier and more comfortable. An integrated smart display enables real-time preview and timely decisions, improving scanning efficiency and the user experience.

Upgraded materials, wireless and portable
The body is made of magnesium alloy and weighs just 560 g. With an integrated edge-computing module and wireless transmission, it is completely free of cables, enabling flexible work without fixed setups. Data transfer and computation are stable and efficient, significantly improving the device's mobility and scanning efficiency.

Massive capture at high speed
Equipped with 108 blue four-cross laser lines, it captures huge amounts of 3D data instantly, keeping the scanning workflow smooth and handling complex workpieces efficiently.

Short-range view, unobstructed scanning
A short-range camera design avoids line-of-sight occlusion, enabling high-precision scanning of hidden features such as gaps, deep holes, grooves, and flow channels. Even in difficult conditions it can fully reconstruct the 3D data of intricate structures, providing comprehensive, reliable data for downstream design, inspection, and reverse engineering.

Multi-mode switching for full-scenario coverage
Three scanning modes (high-speed, fine, and deep-hole) can be switched freely depending on the object, moving seamlessly from quickly capturing overall shape to finely reproducing surface detail to tackling deep holes and blind corners, so one device covers a range of complex needs.

DefinSight full-scenario 3D digitization software platform
The scanner works with DefinSight, Scantech's self-developed full-scenario 3D digitization software platform, efficiently covering the whole workflow from 3D scanning and data acquisition to data analysis with a smooth user experience.

Nanjixiong's comment: The Scantech SIMSCAN-S Gen 2 high-precision palm-sized 3D scanner achieves high-precision dual control of sphericity and flatness through coordinated optical and algorithmic upgrades, backed by international measurement certification and the ISO 10360 standard. It also breaks through on design and performance across the board: the 560 g magnesium-alloy body combines wireless portability with comfortable operation; 108 blue four-cross laser lines, the short-range camera design, and high-speed/fine/deep-hole mode switching solve hidden-area scanning while covering complex full-scenario needs; and the self-developed DefinSight platform links scanning to analysis in one efficient workflow. The product not only breaks the industry trade-off between high precision and portability but also demonstrates the technical strength of domestic high-end 3D scanning equipment. The "form-precision control" trend it leads fits the needs of smart manufacturing and digital twins, and should broaden industrial scanning applications and push industries toward data-driven transformation.

Source: Nanjixiong 3D Printing | Published: 2026-03-06
Published in Nature: Additive Manufacturing Programs "Perception" into Inorganic Structures

Source: BMF (摩方精密)

The outstanding mechanoelectrical perception of sea urchin spines stems from the continuously graded porous structure along their [100] axis. A team led by Academician Jian Lu at City University of Hong Kong, together with Prof. Zuankai Wang at The Hong Kong Polytechnic University and Profs. Chunze Yan and Bin Su at Huazhong University of Science and Technology, used an innovative field-driven, multi-topology feature coupling design method combined with high-precision vat photopolymerization 3D printing to not only reproduce the spines' graded porosity but also recreate their mechanoelectrical sensing function.

The work shifts additive manufacturing from passively "copying structure" to actively "creating function," building an unprecedented bridge between inorganic materials and structures and organic, living perception. This milestone study was published in Nature under the title "Echinoderm stereom gradient structures enable mechanoelectrical perception"; the first authors are Dr. Annan Chen and PhD candidate Ziqin Wang of City University of Hong Kong.

In-situ observation (Fig. 1) shows that spines of living sea urchins have mutually independent, highly sensitive tactile perception. When a spine is stimulated by a falling droplet, it rotates quickly and observably by about 10° relative to the axis of the test (body shell) within 1 s, while surrounding unstimulated spines show no response. High-speed imaging puts the characteristic time of this mechanoelectrical response at about 88 ms. A data acquisition system and digital multimeter measured a peak sensing potential of about 116 mV, with real-time, repeatable stimulus response. This sensing potential and response speed exceed echinoderms' conventional vision by three orders of magnitude. In addition, when a spine is immersed in seawater, flow stimulation likewise produces a detectable sensing potential with a peak of about 30 mV. Histological experiments verified that no living cellular tissue is present on the spine's outer surface or within its 3D structure, indicating that the sensing potential does not depend on living tissue and has a previously unrecognized physical or structural origin.

Fig. 1. In-situ observation of mechanoelectrical perception in living sea urchin spines.

SEM and μ-CT results reveal that the biomineralized spine has a bicontinuous (both solid and pore phases continuous) graded porous stereom skeleton along the [001] direction (from base to tip). The stereom consists mainly of magnesium-bearing calcite, with amorphous calcium carbonate and a small amount of intracrystalline organic matter (about 1.4 wt%). Its microstructure shows high-curvature, smooth minimal-surface features. Crucially, the porous structure at the spine tip has smaller pore size, higher specific surface area, and higher porosity than at the base. This graded porous structure is expected to promote fluid convection and mass transport within the spine, improving liquid transport through the skeletal network. Moreover, the higher porosity and smaller characteristic pore size at the tip strengthen solid–liquid interfacial interactions, while the larger specific surface area provides more interfacial contact and collision sites, favoring the occurrence and amplification of interfacial processes (Fig. 2).

Fig. 2. Analysis of the graded porous structure of sea urchin spines.

When fully wetted, the spine exhibits a real-time response potential to liquid: a response voltage appears during fluid motion and disappears when flow stops. This response arises mainly from a streaming potential. Specifically, interfacial charge transfer occurs on first contact between spine and liquid, establishing an electrical double layer (EDL) at the solid–liquid interface. Once the spine is fully wetted, liquid flow shears the EDL, inducing separation and redistribution of interfacial charge and thereby generating a streaming potential; when flow stops, charge separation ceases, the interfacial charges migrate back and recombine, and the potential difference dissipates. The same streaming potential is detected when seawater flows over the spine.

Finite element simulations show that, compared with the base, the smaller characteristic pore size at the tip significantly increases local flow velocity and liquid pressure, enhancing shear-driven EDL deformation and perturbation and thus raising interfacial charge density. Accordingly, the measured streaming potential rises with flow velocity, indicating that flow-induced pressure increases can further compress the EDL and raise surface charge density. In addition, the higher specific surface area of the tip stereom increases the density of established EDLs and the frequency of solid–liquid interfacial collisions, raising interfacial charge density further. In sum, the pronounced pore-phase gradient along the [001] spine axis is the key structural basis for the high-amplitude streaming potential, giving the spine its outstanding mechanoelectrical perception in aquatic environments (Fig. 3).

Fig. 3. The mechanoelectrical sensing mechanism inside sea urchin spines.

Inspired by this, the study combined biomimetic design with vat photopolymerization 3D printing, using ceramics and BMF's HTL resin to build artificial samples with spine-like gradient structures. Experiments show that biomimetic gradient structures in different materials all reproduce the mechanoelectrical sensing function, with voltage output about 3× and response amplitude about 8× those of non-gradient structures. These results demonstrate both the material generality and the gradient-structure dependence of the sensing function enabled by graded porous structures (Fig. 4). Going further, the biomimetic 3D metamaterial mechanoreceptor built in the study can obtain time-resolved self-monitoring information underwater without any external power supply. Compared with conventional microlattice and porous mechanoreceptors, this biomimetic 3D metamaterial shows better overall performance in manufacturability, structural design freedom, material-system generality, geometric and performance controllability, and underwater time-resolved self-sensing, with potential applications in ocean environmental monitoring, intelligent underwater detection, and water resource management.

Fig. 4. Gradient cellular structures enable generality, practicality, and applicability of mechanoelectrical perception.

Significance and outlook
This work pushes the research frontier of additive manufacturing from macroscopic topology optimization toward the more challenging programming of microscopic structural gradients and direct integration of function, opening a new path for designing and manufacturing next-generation intelligent devices and greatly expanding applications in high-end equipment and biomedicine. More importantly, the study establishes a complete research paradigm: discover a unique natural function, analyze its structural origin, then reproduce and optimize it via additive manufacturing. This replicable blueprint provides a solid methodological foundation for using additive manufacturing to unlock more of nature's secrets and create a new generation of multifunctional materials and intelligent structures.

The academic value and originality of the work were recognized by Nature, whose "News & Views" column featured it: Gilbert, P. U. P. A. Sea-urchin spines can sense water flow. Nature (News & Views) (2026). DOI: 10.1038/d41586-026-00374-6.
Original article: https://doi.org/10.1038/s41586-026-10164-9

Source: 南极熊3D打印 | Published: 2026-03-06
Ohio State University 3D Prints Lunar Regolith into Mullite, a New Technique for Moon-Base Construction

Summary: Researchers at Ohio State University have found that laser directed energy deposition can print lunar regolith directly into building material, removing the need to ship tools and structural parts from Earth.

On the Moon, nothing is easy to replace. Every tool, spare part, and structural component must be launched from Earth at enormous cost. When every kilogram launched carries a huge price tag, building with local materials becomes a necessity rather than merely an interesting idea.

On March 6, 2026, 南极熊 learned that researchers at Ohio State University have developed a solution for building lunar infrastructure using the Moon's loose surface layer, known as regolith, combined with laser directed energy deposition (LDED). The approach could eliminate the need to transport every tool and structural component from Earth, significantly reducing the cost of lunar construction.

About regolith: Regolith is the loose rock layer formed by billions of years of meteorite impacts. It is abundant and non-toxic.

Why laser directed energy deposition (LDED)? LDED feeds regolith directly into a laser melt pool, working more like robotic welding than a conventional 3D printer. Its main advantages:
- It can build on existing surfaces
- It can repair damaged structures in place (not only make new parts inside a sealed chamber)
- It needs no large powder bed, unlike laser powder bed fusion
- It needs no chemical binder, unlike binder jetting

Findings: The study, published in Acta Astronautica, tested LHS-1 (a lunar highlands regolith simulant), varying the atmosphere, laser power, and scan speed and measuring adhesion, porosity, and microstructure.

Key finding — phase transformation: Under the right conditions, regolith transforms into mullite, a ceramic known for its thermal stability and mechanical strength.

Optimal parameters:
- Laser power: 64 W
- Scan speed: 6 mm/s
- Best substrate: an alumina-silicate ceramic substrate produced strong interlayer bonding
- Failed substrates: both stainless steel and glass failed during cooling

Relevance to NASA's Artemis program: The research coincides with NASA's Artemis program, which is pushing for a sustained human presence on the Moon by the end of this decade. The infrastructure supporting that goal will need to come from in-situ manufacturing rather than waiting for resupply missions from Earth.

The research remains at the laboratory stage and is not ready for actual lunar deployment, but it contributes to the development of manufacturing systems designed for extreme, resource-constrained environments.
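As a quick sanity check on the reported optimum: the laser power and scan speed come from the article, but the linear-energy-density formula E = P/v is a standard first-order DED process metric, not a figure the source computes.

```python
def linear_energy_density(power_w: float, scan_speed_mm_s: float) -> float:
    """Energy deposited per millimeter of track length (J/mm), E = P / v."""
    return power_w / scan_speed_mm_s

# Parameters reported as optimal for the LHS-1 regolith simulant.
e = linear_energy_density(power_w=64.0, scan_speed_mm_s=6.0)
print(f"{e:.2f} J/mm")  # ~10.67 J/mm
```

Comparable single-number metrics are commonly used to transfer DED parameters between machines, though they ignore spot size and powder feed rate.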

Source: 南极熊3D打印 | Published: 2026-03-06
Don't Let Blur Slow Down Production: Xinje's High-Definition HMI Series Renders True Industrial Images

In industrial production, a display that is not sharp enough, with details hard to make out, directly degrades on-site monitoring and the operating experience. Xinje's high-definition HMI series TS3-700H-E/TS2-700H-E delivers a comprehensive display upgrade. Built around a 1024×600 high-resolution LCD with wide viewing angles and high clarity, it eliminates blur at the source and faithfully reproduces the real industrial scene, making operation clearer and monitoring more intuitive, and providing professional display support for efficient, stable production.

Monitoring industry — precise rendering of complex views, stronger global oversight. For the monitoring industry's need to render complex flow diagrams and dense trend curves, the display upgrade addresses the pain points of traditional screens on several fronts. More information per screen: a single screen can show more data points, process parameters, alarms, camera views, and control buttons at once, so operators no longer page back and forth, greatly strengthening global oversight of complex production flows. Sharper detail: fine engineering drawings, dense trend curves, small text, and icons are all clearly legible, reducing misreads at the source. Better visual comfort: a tuned high pixel density smooths font edges and sharpens graphics, easing eye fatigue during long shifts.

Packaging and printing — faithful detail and professional color. For the packaging and printing industry's demands on pattern color and detail, dual optimization of contrast and brightness improves both rendering and environmental adaptability: richer detail gradation and more natural color transitions strengthen recognition of key information and further reduce misreads during work, while markedly higher brightness keeps screen content legible even under direct glare on the shop floor.

Universal upgrades for every scenario — wide-angle display. With an 85° viewing angle in all four directions, operators get a good view from multiple positions and angles on the shop floor without repositioning themselves, improving scene adaptability and convenience and supporting multi-operator collaboration, while mitigating the color shift, washout, and blur typical of conventional screens viewed off-axis.

With this across-the-board display upgrade, Xinje's high-definition HMI series precisely serves monitoring, packaging and printing, and multi-environment operations, using professional display technology to optimize human-machine interaction and balance monitoring efficiency, working precision, and operator comfort.

Source: 无锡信捷电气股份有限公司 (Wuxi Xinje Electric Co., Ltd.) | Published: 2026-03-06
Seminar on Japan's Implementation of the 1980 Hague Convention for Diplomatic Missions in Tokyo

March 5, 2026 — On March 5, the Ministry of Foreign Affairs of Japan held a seminar on Japan's implementation of the 1980 Hague Convention for embassies and delegations in Tokyo. An overview of the seminar follows.

The Ministry of Foreign Affairs of Japan held this seminar for embassies and delegations in Tokyo because it recognizes the importance of informing relevant stakeholders abroad that Japan is steadily implementing the Convention on the Civil Aspects of International Child Abduction (the 1980 Hague Convention). In view of the strong interest from overseas, the seminar also served as an opportunity to explain the latest amendment to Japan's Civil Code and other related family-law legislation, which will take effect on April 1, 2026.

The seminar was attended by 31 officials from diplomatic missions, including the embassies of 16 contracting states of the Convention. During the seminar, the Ministry of Foreign Affairs, serving as the Japanese Central Authority under the Convention, explained Japan's implementation of the Convention, including its solid record on the return of children. The Ministry of Justice then briefed attendees on the main points of the latest family-law reform.

The seminar is expected to have promoted greater understanding among embassies and delegations in Tokyo of Japan's implementation of the 1980 Hague Convention and the Civil Code reform, and this enhanced understanding should facilitate the work of relevant countries and regions on Hague Convention-related matters.

Related link: The Hague Convention (the Convention on the Civil Aspects of International Child Abduction)

Source: Ministry of Foreign Affairs of Japan (外务省) | Published: 2026-03-05
GigaDevice (兆易创新) Invests in 微纳核芯

Recent business-registration filings show that 杭州微纳核芯电子科技有限公司 ("微纳核芯") has formally completed a B+ financing round, with GigaDevice (兆易创新) joining as a new investor. The round will further help 微纳核芯 bring its core technology to market and expand application scenarios for large-model inference chips. The registration changes have been recorded, and GigaDevice's investment is expected to pair the two companies' complementary strengths in the chip sector and advance the industry together.

Public information shows that 微纳核芯, founded in 2021 and incubated at the Advanced Institute of Information Technology of Peking University in Zhejiang, is a compute-in-memory AI chip company that describes its technology as globally leading. Headquartered in Hangzhou with subsidiaries in Wuxi, Hefei, and elsewhere, it works in integrated-circuit design and focuses on the development and application of large-model inference chips. The company presents its 3D-CIM™ technology stack as the world's first three-dimensional compute-in-memory architecture, fusing in-memory computing, 3D near-memory computing, and RISC-V compute-in-memory techniques to eliminate data-movement overhead at the root and crack the industry problem of combining high performance, low power, and low cost.

Building on these technologies, 微纳核芯 supplies high-performance, low-power, cost-effective chip solutions for large-model inference in AI phones, AI PCs, IoT devices, and robots. Over the past six years its team has published more than ten record-breaking measured-silicon results at ISSCC, the "Olympics of chip design," placing its technical capability in the global first tier. As a new investor, GigaDevice will inject capital and, combining its own resources and technology in memory chips, help accelerate the industrialization and scaling of 3D-CIM™ and expand end-device markets.

微纳核芯 had previously raised funding from Sequoia China, Xiaomi, Luxshare Precision's investment arm, and other well-known institutions. After this B+ round, the company plans to step up R&D investment, round out its technology ecosystem, and push compute-in-memory chips into more AI scenarios, supporting the high-quality development of China's post-Moore AI computing industry and a stronger international voice in compute-in-memory technology.

Source: 科创板日报 (STAR Market Daily) | Published: 2026-03-06
成都华微 and 循态量子 Sign Strategic Cooperation to Advance Industrialization of Quantum Technology

According to an official announcement from 成都华微, the company recently signed a strategic cooperation agreement with 上海循态量子科技有限公司. Drawing on their respective strengths, the two sides will jointly advance the industrialization and large-scale application of quantum information technology, help build a quantum shield for national information security, and move quantum technology from the laboratory into real-world applications. The cooperation is an important step in deepening their integration of "quantum + integrated circuits"; earlier exchanges and discussions between the two companies laid a solid foundation for it.

成都华微 said the partnership is a key move to enter the quantum-secure-communication track on top of its integrated-circuit foundation, with the core aim of industrializing quantum technology and turning research results into deployed products. As an integrated-circuit design company under the national "909" project, 成都华微 focuses on IC R&D, design, testing, and sales. It holds substantial core technology reserves and offers multiple IC product lines used widely in electronics, communications, and other fields, giving it a solid basis for fusing quantum technology with integrated circuits.

上海循态量子 has deep expertise in quantum science and professional strengths in quantum-information R&D and applications, so the cooperation is complementary. The two sides will focus on the core needs of industrializing quantum information technology, pool their technologies and resources, deepen the fusion of quantum technology with integrated circuits, and accelerate the development and deployment of quantum-secure-communication products, supporting the high-quality development of China's quantum industry and further strengthening national information security. The agreement also signals 成都华微's strategy of expanding into new tracks to sharpen its core competitiveness.

Source: 全球半导体观察 | Published: 2026-03-06
World's First 35 µm Ultra-Thin Power-Semiconductor Wafer Line Completed and Commissioned in Shanghai

On March 4, according to an official release from Songjiang, Shanghai, 尼西半导体科技(上海)有限公司, located in the Songjiang Comprehensive Bonded Zone, formally announced that its production line for 35 µm ultra-thin power-semiconductor wafer processing and package testing — the world's first — has been completed and put into operation. The breakthrough fills a gap in domestic manufacturing and marks the entry of China's ultra-thin power-semiconductor wafer technology into volume production.

Public information shows that 尼西半导体, founded in 2007, is a wholly owned subsidiary of 万国半导体(香港)股份有限公司, focusing on semiconductor packaging, testing, and wafer manufacturing for consumer-electronics products; the new line is a major upgrade of its core production base. The line integrates wafer processing with package testing. At just 35 µm — about half the diameter of a human hair — the wafers are extremely difficult to process, and the core technology had long been monopolized by overseas companies.

The line reportedly overcame several technical hurdles: wafer-thickness tolerance is held at 35 ± 1.5 µm; chemical etching removes 92% of grinding-induced stress damage, cutting the breakage rate of the ultra-thin wafers to under 0.1%; and the dicing step uses a customized laser process with a yield of 98.5%. The line's core equipment was jointly developed by 尼西半导体 and domestic equipment makers, keeping it independently controllable. Test capacity reaches 120,000 finished devices per day, and the bonders handle about 400 wafers per day.

Source: 上海松江 (Shanghai Songjiang) | Published: 2026-03-06
帝奥微 to Transfer Its Entire Stake in 江苏云途

On March 5, 江苏帝奥微电子股份有限公司 ("帝奥微") announced plans to sell its stake in an associate company. Under the announcement, 帝奥微 intends to sign an equity purchase agreement with 杭州云控半导体有限公司 ("杭州云控"), under which 杭州云控 will acquire, for 45.713973 million yuan, 帝奥微's 173,262 yuan of registered capital in 江苏云途半导体有限公司 ("江苏云途"), a stake representing 1.9388% of 江苏云途's total registered capital.

The announcement states that the transaction is neither a related-party transaction nor a material asset restructuring. It was approved at the 22nd extraordinary meeting of 帝奥微's second board of directors and does not require submission to the shareholders' meeting. The price represents a 14.28% premium over the stake's book cost of 40 million yuan. Payment is in installments: 杭州云控 must pay 40% within 10 working days after the agreement's conditions precedent are satisfied, and the balance within 10 working days after the equity-transfer registration is completed.

江苏云途, founded in July 2020 with registered capital of 8.936686 million yuan, designs, manufactures, and sells integrated-circuit chips and is operating normally. 帝奥微 said the sale fits its development plan: it consolidates and optimizes the asset structure, improves asset liquidity and utilization, and adds working capital to support the main business and strengthen core competitiveness. After closing, 帝奥微 will no longer hold any equity in 江苏云途, fully exiting the investment. The announcement also flags uncertainties, including counterparty performance risk and possible changes in policy or law.
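The premium figure can be checked directly from the two amounts in the announcement (both originally stated in units of 10,000 yuan):

```python
price = 4571.3973   # agreed transfer price, in units of 10,000 yuan
book_cost = 4000.0  # book cost of the stake, in units of 10,000 yuan

# Premium over book cost, as disclosed in the announcement.
premium = (price - book_cost) / book_cost
print(f"{premium:.2%}")  # 14.28%
```

The computed 14.28% matches the disclosed premium, confirming the announcement's figures are internally consistent.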

Source: 全球半导体观察 | Published: 2026-03-06
Diamond Cooling Technology Debuts, with AMD AI Servers First to Adopt It

As AI-server power consumption keeps climbing, new material-based approaches are emerging alongside air and liquid cooling in data centers. Startup Akash Systems has announced AI servers built on its Diamond Cooling technology, debuting on the AMD Instinct MI350X GPU platform.

On the hardware side, the server is manufactured by MiTAC (神达电脑). The system also integrates two 5th-generation AMD EPYC 9005-series processors, AMD Pensando Pollara 400 AI network interface cards (NICs), and the AMD ROCm software platform to support dense AI training and inference workloads.

Akash says diamond has the highest thermal conductivity of any known material, roughly 5× that of the copper commonly used in the industry. With Diamond Cooling, GPU and HBM (high-bandwidth memory) temperatures can drop by up to about 10°C, reducing the likelihood of thermal throttling, raising performance per watt (FLOPS/W) by up to 22%, and potentially improving overall AI-workload throughput by about 15%.

Akash notes that Diamond Cooling can be combined with existing air- or liquid-cooling systems, lowering GPU temperatures while increasing data-center compute density, and can help cut cooling-energy consumption and extend hardware life.

Akash also says it will launch more AMD Instinct GPU systems with Diamond Cooling this year, including the AMD Instinct MI355X and future generations of AMD Instinct GPUs.

Notably, in February Akash delivered the world's first GPU servers with diamond cooling — systems based on the Nvidia H200 — to NxtGen AI PVT Ltd, India's largest sovereign cloud provider.

Source: 科技新报 (TechNews) | Published: 2026-03-06
CSRC Approves 盛合晶微's STAR Market IPO Registration

On March 5, the China Securities Regulatory Commission (CSRC) announced on its official website that it has formally approved the registration of the initial public offering and STAR Market listing of 盛合晶微半导体有限公司 (approval No. 证监许可〔2026〕373号), underscoring the capital market's support for hard-tech companies.

Per the CSRC's official approval, granted under the Securities Law of the People's Republic of China and the Measures for the Administration of Registration of Initial Public Offerings, 盛合晶微 must carry out the offering strictly according to the prospectus and the offering and underwriting plan filed with the Shanghai Stock Exchange. The approval is valid for 12 months from the date of registration.

盛合晶微 is a backbone enterprise in China's wafer-level advanced packaging and testing sector, with core businesses in middle-end-of-line silicon-wafer processing and wafer-level packaging. The IPO aims to raise 4.8 billion yuan for projects including 3D multi-chip integrated packaging. Its application was accepted by the Shanghai Stock Exchange on October 30, 2025, and cleared the listing committee on February 24, 2026 — an unusually fast review of just over four months. The company's results have grown steadily, with revenue compounding at 69.8% annually from 2022 to 2024, and it holds substantial core technology reserves. The listing should help it expand in the high-end packaging and testing market and strengthen its competitiveness.
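For scale, the 69.8% compound annual growth rate cited in the article implies the following revenue multiplier over the two-year span 2022→2024. The CAGR figure is from the article; the multiplier is simple compounding, and the absolute revenue values are not disclosed:

```python
cagr = 0.698   # compound annual growth rate reported for 2022-2024
years = 2      # 2022 -> 2024 spans two annual compounding periods

# Total growth multiplier implied by the CAGR over the span.
multiplier = (1 + cagr) ** years
print(f"{multiplier:.2f}x")  # ~2.88x revenue over two years
```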

Source: 全球半导体观察 | Published: 2026-03-06
