Unitree Robotics Targets the STAR Market with a Planned Raise of Over 4.2 Billion Yuan!

Another milestone for hard tech! On March 20, the Shanghai Stock Exchange officially disclosed that the STAR Market IPO application of Unitree Robotics Co., Ltd. (宇树科技) has been accepted. The offering aims to raise 4.202 billion yuan, making it a flagship project under the STAR Market's newly launched "pre-review" mechanism.

Use of proceeds: 4.2 billion yuan for R&D and capacity

Founded in 2016 and headquartered in Hangzhou, Unitree is a leading global high-performance general-purpose robotics company and a national-level "Little Giant" specialized SME. It was founded by post-90s engineer-entrepreneur Wang Xingxing and focuses on the R&D, production, and sale of high-performance general-purpose humanoid robots, quadruped robots, core robot components, and embodied-intelligence models.

Unitree is backed by many well-known investors:
● Meituan (via Hanhai Information and other entities) holds about 9.6488% in aggregate
● Sequoia China holds 7.1149%
● Matrix Partners China holds 5.4528%
● Shenzhen Capital Group, Tencent Technology, and others have also invested

Founder and actual controller Wang Xingxing directly holds 23.8216% of the shares and, under a weighted-voting-rights arrangement, controls 68.7816% of the company's voting power in aggregate.

The 4.202 billion yuan raised will mainly fund:
● An intelligent-robot model R&D project (48.13% of proceeds, over 2 billion yuan), focused on embodied foundation models and other underlying "brain" and "cerebellum" technologies
● A robot-body R&D project
● A new intelligent-robot product development project
● An intelligent-robot manufacturing base, expected to reach an annual capacity of 75,000 humanoid robots and 115,000 quadruped robots once completed

Technical strength: full-stack in-house R&D, aimed at embodied intelligence

Unitree's core moat is full-stack in-house development. It has independently mastered key components such as motors, reducers, controllers, and lidar, combined with high-performance perception and motion-control algorithms. Over 95% of its core components are self-developed and over 85% are domestically sourced. The company has filed more than 200 patent applications at home and abroad, with more than 180 patents granted, keeping it at the global forefront of legged robotics.

Its product lineup covers quadruped robots (Laikago and the Go/B2 series) and humanoid robots (the H, G, and R series). Unitree holds roughly 60-70% of the global quadruped-robot market, and its humanoid robots lead the field with a 32.4-38% global share. Its H1 and H2 humanoids appeared on CCTV's Spring Festival Gala, where their high-degree-of-freedom bionic design and precise motion control made them emblematic of Chinese humanoid robotics.

That technical strength translated into a striking 2025 report card:
● Revenue of 1.708 billion yuan, up 335.36% year over year
● Net profit excluding non-recurring items of 600 million yuan, up 674.29%
● Humanoid-robot shipments topping 5,500 units, first worldwide

Notably, in the first three quarters of 2025, humanoid robots accounted for 51.80% of Unitree's revenue, surpassing quadrupeds for the first time. This structural shift marks Unitree's acceleration from "quadruped king" toward "humanoid leader."

For Unitree, a core player in China's humanoid-robot industry, this STAR Market listing is both a pivotal leap for the company and a signal that China's humanoid-robot sector has entered an accelerated capitalization cycle, giving homegrown hard tech another moment in the spotlight.

Source: 21IC电子网 | Published: 2026-03-21
😺 OpenAI gave GPT-5.4 Mini its own interns

Welcome, humans. Anthropic just dropped a feature that makes us feel like we're living in the future. Cowork Dispatch lets you text Claude a task from your phone, go make lunch, and come back to finished work on your desktop. One continuous conversation that picks up where you left off. Yes, this is what the future of AI should enable! It's an early research preview, so pair your devices here.

Now, if you want the full backstory on how Cowork went from a 10-day prototype to what might be the closest thing to AGI yet (Swyx's words, not ours, but uh… we get the sentiment!), Latent Space interviewed Anthropic's Felix Rieseberg on why local-first agent workflows matter, why skills may now matter more than MCPs (except in some cases), and why the real frontier is trusted task execution, not better chat. Worth the watch!

Here's what happened in AI today:
😼 OpenAI released GPT-5.4 Mini and Nano, purpose-built "subagents" that work as cheap, fast AI workers for your main AI model
📰 The Pentagon is developing its own AI models to replace Anthropic after their $200M contract collapsed; OpenAI clinched an AWS deal to expand its government footprint
📰 OpenAI is cutting back on side projects (Sora, Atlas browser, hardware) to focus on coding after Claude Code's dominance created a "code red"
🍪 Unsloth Studio trains and runs 500+ open AI models locally on your computer, 2x faster with 70% less memory
📖 Ben Thompson argues the agentic AI wave is fundamentally different from past tech bubbles

P.S.: Later today at 4pm PT, our very own Corey is going on NVIDIA's livestream! Check him out as he's interviewed about some of the wild and wacky projects he's been working on.

Want to reach 675,000 AI-hungry readers? Click here to advertise with us.
😼 OpenAI Released GPT-5.4 Mini (and Your AI Just Hired Its Own Interns)

DEEP DIVE: OpenAI Built a Team of AI Interns for Your AI Boss

So it seems like every AI company has the same problem right now: their smartest model is too slow and expensive to do everything. How did OpenAI attempt to solve this? With GPT-5.4 mini and nano, released today, and the play is less about shrinking the model and more about rethinking how AI systems work altogether. These are purpose-built "subagents" (think of them as junior associates that a senior partner delegates tasks to).

In Codex, the full GPT-5.4 acts as a project manager: it plans, makes decisions, and coordinates. Then it hands off parallel tasks (searching codebases, reviewing files, running tests) to a swarm of GPT-5.4 minis that execute fast and cheap. It's the McKinsey model, except these consultants actually write code.

You can think of subagents as the organizing system that will replace the model router. The router doesn't totally go away; it just gets smarter, delegating tasks to subagents (running faster, cheaper models) and abstracting the system one layer deeper, so you and I no longer need to worry about picking and assigning the right model to the task.

Now, faster and cheaper means nothing if it also means dumber. So here's what the benchmarks say: GPT-5.4 mini scores 54.4% on SWE-Bench Pro (a coding benchmark; just 3 points behind the full GPT-5.4) and 72.1% on OSWorld computer-use tasks (testing how good the agent is at using your computer), nearly matching the flagship model.

Pricing: Mini runs $0.75 per million input tokens; nano costs just $0.20 (that's $0.05 less than Mercury 2, if I remember correctly). Mini uses 30% of GPT-5.4's Codex quota, so developers get roughly 3x the throughput.

Speed: Over 2x faster than GPT-5 mini, with similar or better quality across coding, tool-calling, and vision tasks.

Can you use it? GPT-5.4 mini = yes.
It's live in the API, Codex, and ChatGPT (free users get it through "Thinking" mode). Nano is API-only atm (conspiracy-minded folks would argue this market positioning is meant to compete directly with Mercury 2...).

Why this matters: If you're a regular ChatGPT user, the speed improvements matter most to you. Responses in Thinking mode get faster and better. And if you're mostly using ChatGPT on your phone, it's worth checking out the Codex desktop app (now on Mac and Windows) for heavier work.

Codex is a great app; the only problem is it's built for coders, not the rest of us. We've been reading a lot of takes lately that argue OpenAI needs to give regular business users the same Codex-app capabilities in ChatGPT, or an equivalent work tool. Anthropic's doing something similar with Cowork, which brings Claude Code-style agent capabilities to non-developers. And we also read a tweet that hinted at Anthropic launching a "Codex app killer" sometime next week. Smash that Eyes Looking Emoji Button!

Our take: The price is the real story here. We don't really care about "mini" models because, frankly, we try to use the best-quality model possible, whenever possible. That's cost-prohibitive, of course; so if we're going to use less than the best, it had better be free or close to it. According to OpenAI, Mini delivers ~95% of GPT-5.4's performance on computer use for a fraction of the cost. But compare it to the broader small-model market and it's actually the priciest option: Gemini 3 Flash scores 78% on SWE-bench Verified at $0.50/$3.00. Claude Haiku 4.5 matches Sonnet 4-level quality at $1/$5. And the wildcard is Mercury 2, a diffusion-based model (it generates all tokens in parallel instead of one by one) that hits ~1,000 tokens/sec at just $0.25/$0.75 (though Nano has Mercury beat here). GPT-5.4 mini is a great model, but "cheapest" belongs to someone else. Could it be "Pareto frontier" (the highest intelligence for the lowest cost) level quality, though?
It might be (just keep refreshing Artificial Analysis until they benchmark it)...

FROM OUR PARTNERS

Are you risk-ready or risk-exposed? Breaches are inevitable. What's far less clear is whether organizations are truly ready to recover. Based on insights from security and technology leaders worldwide, Cohesity's Global Cyber Resilience research reveals what the top 6% of resilient organizations do differently. The findings are sobering: most have faced material attacks, many more than once. But the real value lies beyond the statistics. It shows why some teams recover fast while others absorb lasting operational and reputational damage. From recovery speed to data resilience and the real impact of AI and automation, this is a clear view of what modern resilience looks like in practice. Explore Now

🎓 AI Skill of the Day: Lock Down Your AI Agent in 60 Seconds

As you probably know by now, OpenClaw is an open-source AI agent that runs tasks autonomously on your machine. The problem? No security guardrails by default. Some of the worst risks include potentially exposed keys, unrestricted file access, and uncontrolled network activity. Security folks called it all variations of "dumpster fire" when it first released. But now we have NVIDIA NemoClaw, which wraps OpenClaw in a sandboxed runtime called OpenShell that enforces network, filesystem, and privacy policies so your agent can only touch what you approve. Here's how you (yes you, a non-technical person!) can install it (docs):

Step 1: Open your terminal (ask Claude or ChatGPT if you don't know what that is) and paste this one line. It downloads everything, walks you through setup, and creates your sandbox automatically:
curl -fsSL https://nvidia.com/nemoclaw.sh | bash

Step 2: Connect to your sandboxed agent and start chatting:
nemoclaw my-assistant connect openclaw tui

That's it. Your agent works, and your data stays locked down. Two commands between "dumpster fire" and "locked vault" is a pretty good trade.
Don't want to deal with the terminal? Deploy NemoClaw on Brev and NVIDIA hosts the whole thing for you; one click, no setup, for only $0.13 per hour. If you wanna try the claw but you've been too afraid, this is the way.

Want more tips like this? Check out our AI Skill of the Day Digest for this month. Have a specific skill you want to learn? Request it here.

Trending: Three popular Neuron podcast eps… New episodes air every week on: Spotify | Apple Podcasts | YouTube

🍪 Treats to Try

Claude Cowork now dispatches tasks from your phone or desktop in one continuous conversation that picks up where you left off; assign Claude a task, walk away, and come back to finished work (early research preview; pair your devices here) —free to try.

Unsloth Studio trains and runs 500+ open models (Qwen, DeepSeek, Gemma, vision, audio) locally on Mac/Windows/Linux 2x faster with 70% less memory, auto-creates datasets from your PDFs and documents via visual workflows, and exports to all formats (GitHub) —free to try.

Gamma Imagine generates brand-specific charts, social graphics, and infographics from text prompts inside Gamma's 100+ presentation templates (integrates with ChatGPT, Claude, Zapier, Atlassian) —free to try.

Mistral Forge builds custom AI models trained on your company's proprietary data, policies, and workflows so the model actually knows your business instead of giving generic answers; covers everything from data prep to alignment to production deployment (intro video) —enterprise pricing.

Hermes Agent v0.3.0 gives you real-time streaming AI agents across CLI and every platform with a plugin system to share tools and skills, plus live Chrome control, VS Code/Zed/JetBrains integration, and local voice mode —free to try.

Manus My Computer brings Manus's AI agent directly to your desktop so it works on your actual files, apps, and browser without uploading anything.
Proton Mail Born Private reserves a private email address for your child (from birth to age 15) with zero tracking, no ads, and zero-access encryption; you pick the address, donate $1+ to the Proton Foundation, and unlock it whenever they're ready —$1 minimum donation.

📰 Around the Horn

The Pentagon is developing its own large language models to replace Anthropic after their $200M contract collapsed over surveillance and weapons clauses, while OpenAI clinched an AWS deal to expand its government footprint.

OpenAI is cutting back on side projects (Sora video app, Atlas browser, hardware device) to refocus on coding and enterprise after Claude Code's dominance created a "code red"; Codex now has 2M weekly active users. Here's a good recap of all their sidequests.

Amazon CEO Andy Jassy told employees AI will push AWS to a $600B annual run rate by 2036, double his prior projection.

Microsoft shook up its Copilot AI leadership team, freeing Mustafa Suleyman from day-to-day management.

Want absolutely EVERYTHING that happened in AI this week? Click here!

FROM OUR PARTNERS

The 2026 B2C ecommerce AI trends: Discover how AI-powered search, agentic AI, and personalization are transforming ecommerce. With 61% planning agentic AI adoption and 63% seeing higher purchase likelihood with AI tools, this report reveals what's driving revenue, loyalty, and competitive advantage in 2026. Get the report

📖 Midweek Wisdom: Your reading list for the middle of the week:

Agents Over Bubbles — Ben Thompson makes the case that the agentic AI wave is fundamentally different from prior tech bubbles, and why that matters for how you think about what's coming.

Post-Apocalyptic Education — Ethan Mollick argues the Homework Apocalypse is already here (82% of undergrads use AI for schoolwork), teachers can't reliably detect it, and students overestimate what they're learning when AI does the work.
The Next Phase of Open Models — Nathan Lambert argues open models will win by specialization, not by chasing closed-model performance, splitting into three distinct classes with different strengths.

The Karpathy Loop — Fortune profiles how Andrej Karpathy ran 700 autonomous AI experiments in 2 days, and what that tells us about where agents are heading.

How to Buy an AI 'Grassroots' Movement — Veronica Irwin investigates the manufactured grassroots campaigns behind AI lobbying and who's really funding them.

How to Survive the AI Age: A Concrete Guide — A practical framework for navigating career and life decisions as AI reshapes the economy.

That's all for now. What'd you think of today's email?

P.P.S.: Love the newsletter, but only want to get it once per week? Don't unsubscribe—update your preferences here.

Source: The Neuron | Published: 2026-03-18
😺 Amazon spent $200B and broke its own website

Welcome, humans. ICYMI: We interviewed Danny Wu, Canva's Head of AI Products, on how they're building a "Creative Operating System" from 24 billion AI uses. Plus, we're LIVE today at 1pm ET breaking down the agent wars: Microsoft, Google, Anthropic, and OpenAI all shipped this week. Special shout-out to the sponsor of today's new video, Cohesity! Check them out. Watch now: YouTube | Spotify | Apple Podcasts

LATER TODAY: we're going LIVE at 10am PT | 12pm CT | 1pm ET. Every major AI company is shipping agents this week (Microsoft, Google, Anthropic, OpenAI), and we're breaking it all down with demos, hot takes, and beginner-friendly tips. Join us: YouTube | LinkedIn | X

Here's what happened in AI today:
🙀 Amazon's AI-generated code caused a string of outages, including a 6-hour retail site crash.
📰 The U.S. Senate approved GPT, Gemini, and Copilot for official use by Senate aides.
📰 Anthropic launched The Anthropic Institute to study AI's societal impacts.
🎓 How to actually use AI coding tools without breaking everything (hint: think like an architect, not a speedrunner).
🍪 NVIDIA's Nemotron 3 Super just dropped as a 120B open-weight model you can run locally with 1M context.

🙀 Amazon's AI Code Is Breaking Amazon

If you've ever shipped code on a Friday and immediately regretted it, imagine doing that with AI-generated code... across one of the largest e-commerce platforms on Earth. Amazon just held a mandatory engineering meeting after a string of outages hit its retail website and app, including a six-hour crash last week that left customers unable to check out, see prices, or access their accounts. An internal briefing note described the incidents as having a "high blast radius" and being related to "Gen-AI assisted changes."
Here's what we know:
● Senior VP Dave Treadwell acknowledged that "best practices and safeguards" around AI coding tools aren't fully established yet
● Junior and mid-level engineers now need senior sign-off on any AI-assisted code changes
● AWS separately suffered a 13-hour outage in December after its Kiro AI tool deleted and recreated an entire coding environment
● Amazon has dashboards tracking whether engineers hit minimum daily AI usage targets
● Amazon disputes that AI wrote the bad code; it says this is a "user error" and "protocol" problem

Meanwhile, on Reddit, current and former Amazon engineers paint a grimmer picture. One described "on-calls using AIs to fight each other's AIs in a proxy war of blame." Another said delivering projects matters more than whether projects actually work. Sounds healthy. Amazon also laid off 16,000 workers in January and mandated 80% AI tool usage targets (again, dumb; why is "usage" the metric and not "make our products better"?). Oh, and they're spending $200B on capex this year. So do the math: fewer engineers + more AI-generated code = more mandatory "WTF did we just break" meetings.

Why this matters: As The Primeagen and AI researcher Demetri Spanos discussed this week, the models are smart, but they're not THAT smart. The practices around how they're used are the real problem. Most of the excitement since December 2025 comes from the maturation of agent loops and team workflows, not raw model improvements alone. AI can write code, but the code it writes is often far more verbose than it needs to be, and when some poor human (likely you) has to go in and read it, unlike other large, well-known codebases around the world, no one knows wtf this one says… including you. So how do you set up your processes to catch the code that AI writes wrong? And how do you set up your systems to get AI to write code efficiently?
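One concrete process answer shows up in the skill section below: constrain the AI to one reviewable module at a time instead of letting it dump a whole project at once. Here's a toy sketch of that loop (our own illustration; generate() is a stub standing in for a real model API call, and all the names are ours, not from the interview):

```python
# Review-gated, module-by-module generation: plan the modules up front,
# then generate and human-review one module before starting the next.

def generate(prompt: str) -> str:
    """Stub for a real LLM API call; returns placeholder 'code'."""
    return f"# code for: {prompt}"

def build_project(modules, review):
    """review(name, code) returns True to accept a module, False to retry it."""
    accepted = {}
    for name in modules:
        while True:
            code = generate(f"Write only the `{name}` module (200-2,000 lines).")
            if review(name, code):   # the human-in-the-loop gate
                accepted[name] = code
                break                # only now move on to the next module
    return accepted

# Example run: auto-accept every module (a real review would read the diff).
plan = ["parser", "storage", "api"]
result = build_project(plan, review=lambda name, code: True)
print(list(result))  # ['parser', 'storage', 'api']
```

The point isn't the code; it's the gate: nothing generates module N+1 until a human has signed off on module N.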
As Dylan Patel said in a recent Matt Berman interview, some of his team is spending something like $5K a day on Claude Code tokens. How many tokens do you think that guy is spitting out? Amazon will figure this out. But they're learning the lesson every company using AI for production code will eventually learn: the speed you gain means nothing if you can't trust the output.

FROM OUR PARTNERS

Ever wondered what Reddit's AI SOC roadmap actually looks like? Watch this exclusive session with the Reddit security team to explore how one of the internet's largest platforms is building an AI SOC at scale. In this session, you'll learn: Reddit's overall approach to AI; the key challenges that led Reddit to build an AI SOC; the governance policies they're putting in place to manage AI safely; their forward-looking AI roadmap for the SOC; and a live walkthrough showing how you can build your own AI-powered SOC. Watch here

🎓 AI Skill of the Day: The Architect Approach to AI Coding

Here's another insight from Demetri Spanos: most people using AI coding tools are trying to go 10x faster. That's the wrong goal. In the history of software engineering, a 10-20% productivity improvement is already enormous. So here's the real question: can you get 10% better using AI? If so, you're already winning.

The mistake most people make is asking AI to generate an entire project at once. Instead, think like an architect. Here's a practical framework from Spanos. The prompt (this is as much a prompt for YOU, the engineer, to think through as it is for the AI):

I'm building [describe your project]. Before writing any code, help me:
1. Break this into separate modules (aim for 10-20 components)
2. Define how each module talks to the others (APIs, data flow)
3. Estimate the approximate size of each module (ballpark lines of code)
4. Identify which modules can be built independently

Then generate each module one at a time, separately. Each module should be 200-2,000 lines.
Do not generate the next module until I've reviewed the current one.

Why this works: AI models write too many lines of code when given open-ended prompts. By constraining each generation to a single module with clear boundaries, you get code that's reviewable, testable, and actually maintainable. You also catch errors per module instead of debugging a 10,000-line blob. And by going slower, you can actually keep track of what you've built when something does inevitably break.

The key insight: know roughly what the components should be, ballpark the size of the code, evaluate each module separately, then assemble. Maintain quality and increase it by 10%. That's the whole game.

Now, for non-AI coders, here's one for you: Lance Martin built a Claude Code skill that makes it natively understand Claude API features like prompt caching, adaptive thinking, effort control, tools, and more, with ready implementations across eight languages (GitHub).

Want more tips like this? Check out our AI Skill of the Day Digest for this month. Have a specific skill you want to learn? Request it here.

🍪 Treats to Try

*Asterisk = from our partners (only the first one!). Advertise to 675K+ readers here!

*AI+ Renaissance is the headline AI startup conference of the year, held in San Francisco. On March 15, the Sunday before NVIDIA's GTC, AI+ is bringing together 2,000 founders, researchers, builders, and investors to chart the future of the AI industry. Ticket sales end soon. Get yours now.

NVIDIA Nemotron 3 Super is a 120B open-weight model (12B active parameters) with 1M context and native multi-token prediction for faster inference; now available on Ollama, LM Studio, and Cloudflare Workers AI—free to run locally.

OpenAI plans to integrate Sora video generation directly into ChatGPT to push weekly active users toward 1 billion after the standalone Sora app dropped from #1 to #165 on the App Store (thread).
Wondering turns any topic into a guided learning path with bite-size visual lessons, active practice, and long-term mastery tracking, from the ex-head of NotebookLM (App Store)—free to try.

Proof is a document editor where you and AI agents co-write in real time, with agents leaving comments and suggesting edits you accept or reject, and every character tracked to who wrote it (code)—free to try.

📰 Around the Horn ("Should," a.k.a. "we'll see :)")

The U.S. Senate approved ChatGPT, Gemini, and Copilot for official use by Senate aides and voted 99-1 to let states continue developing their own AI regulations.

Ford launched "Ford Pro AI" to analyze over 1 billion daily data points from 840,000 commercial vehicle subscribers for fleet optimization, route planning, and predictive maintenance.

Perplexity announced Personal Computer, an always-on Mac mini AI operating system that gives its assistant persistent local access to your files, apps, and sessions.

Anthropic researchers found that 10 out of 16 leading chatbots helped plan violent attack scenarios when prompted; only Claude refused 100% of queries.

Want absolutely EVERYTHING that happened in AI this week? Click here!

FROM OUR PARTNERS

Allow us to reintroduce… Slackbot (now GA). Think of Slackbot as your always-on teammate. It understands your conversations, files, and workflows to deliver what you need, right when you need it, without any setup. Instead of hunting for information, Slackbot synthesizes what you need — respecting your permissions and using only what you can already see. Watch now

Thursday Trivia: One is AI, and one is real. Which is which? (The answer is below — no cheating!)
More from the TA Family:
Microsoft Agent Flaw Enables Remote Code Execution via AI Agents — eSecurity Planet
Perplexity Comet Browser Bug Leaks Local Files via AI Prompt Injection — eSecurity Planet

Trivia Answer: A is AI, and B is real. Fair warning: after clicking the video from B, I started getting a lot of Norwegian dance videos in my TikTok algo, and I'm… not mad about it??

That's all for now. What'd you think of today's email?

Source: The Neuron | Published: 2026-03-12
😺 Meta bought a social network run by bots

Welcome, humans. So Meta acquired an AI social network where AI agents posted fake content, and the posts went viral anyway. Dead internet theory go brrrrVVVRRROOOOm?!

ICYMI, the platform is called Moltbook. It was designed as a social network where AI agents interact with each other. The problem (or, depending on your perspective, the opportunity) was that users couldn't tell which posts were from bots and which were from people. The engagement numbers were apparently so impressive that Meta swooped in to acquire the company and bring its founders into the Superintelligence Labs team. A social network full of fake posts that people can't stop engaging with. Meta must have felt right at home. Say what you want about the strategy, but Meta is clearly betting that the future of social isn't human-to-human; it's human-to-agent-to-human. And honestly? They might be right. But be honest: you know they're just trying to sell ads to agents and cut out the OpenAI middleman. Classic Meta!

Here's what happened in AI today:
😻 When AI agents get real access to real systems, this NVIDIA security rule could save your company.
📰 Yann LeCun raised over $1B (Europe's largest-ever seed round) for a new AI lab building world models.
📰 The White House is preparing an executive order to cut all federal ties with Anthropic over "woke" AI safety guardrails.
🎓 NVIDIA's internal "Rule of Two" for keeping AI agents from wrecking your systems.
🧪 ChatGPT now creates interactive visual explanations for 70+ math and science concepts.

😻 Everyone's Building AI Agents. Almost Nobody's Securing Them.

So The Enterprise AI Strikes Back week has struck again, and this time it's Google, with new updates to Gemini in Drive. Google rolled Gemini into Docs, Sheets, Slides, and Drive for its 3 billion Workspace users.
Gemini now writes formulas, pulls data from the web, builds dashboards, and reformats entire presentations on command. It's basically an agentic collaborator living inside the tools you use eight hours a day. Meanwhile, a wave of innovative new AI labs are building the infrastructure to make agents even more capable:

Yann LeCun raised $1B for AMI Labs to build AI that understands the physical world through world models, persistent memory, and planning. (Jürgen Schmidhuber had some thoughts about whose idea that was.)

Mira Murati's Thinking Machines Lab signed a gigawatt-scale strategic partnership with NVIDIA for next-gen Vera Rubin systems, a deal big enough to quiet skeptics who questioned whether the startup had substance beyond its famous founder.

Andrej Karpathy let an autonomous Claude agent run overnight on his codebase. It discovered ~20 improvements and committed them all; no human review required.

So who's thinking about security here? We talked to Microsoft about exactly this, so we know they are. But, actually, so is NVIDIA. On the Latent Space podcast, NVIDIA's Brev team shared their internal "Rule of Two": AI agents can do three things (access files, access the internet, execute code), and you should only ever let them do two at once. Files + code execution? Fine, but kill internet access. Internet + files? Lock down what the agent can do. All three? That's how malware gets injected.

Why this matters: agents aren't theoretical anymore. One just hacked McKinsey's internal chatbot in under two hours, gaining access to 46.5 million chat messages and 728,000 confidential client files. That wasn't a nation-state attack. It was one autonomous agent exploiting a SQL injection through an unauthenticated API. NVIDIA says they also won't put company data in any model they don't control internally. They run their own models on internal Dynamo clusters. Must be nice.
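That Rule of Two is simple enough to express as a configuration guard. Here's a minimal sketch (our own toy illustration of the idea, not NVIDIA's actual tooling; the capability names are ours):

```python
# Rule of Two: an agent may combine at most two of the three risky
# capabilities (file access, internet access, code execution).
RISKY = {"files", "internet", "exec"}

def check_rule_of_two(capabilities):
    """Return the granted risky capabilities, or raise if all three are granted."""
    granted = set(capabilities) & RISKY
    if len(granted) == 3:
        raise ValueError(
            "Rule of Two violated: drop one of files/internet/exec "
            "(e.g. kill internet access if the agent reads files and runs code)."
        )
    return granted

check_rule_of_two({"files", "exec"})       # fine: two of three
check_rule_of_two({"internet", "files"})   # fine, but scope what the agent can do
# check_rule_of_two({"files", "internet", "exec"})  # would raise ValueError
```

The real work is enforcing this at the sandbox layer, of course; a check like this just makes the policy explicit at configuration time instead of leaving it implicit in whatever the agent happens to be allowed to touch.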
For the rest of us, that means checking your data agreements before letting an agent loose on proprietary information. We just watched V1 of an upcoming podcast episode our editors cooked up where we talk to Proton about AI and privacy; it'll make your hair stand up. Definitely look out for that one coming soon!

Our take? The organizations that move fastest with agents will be the ones that draw smart boundaries from day one, not those that ignore security or block everything. As they say, "don't say no, say how."

FROM OUR PARTNERS

How teams plan to use MCP this year: Most teams building AI agents plan to adopt the Model Context Protocol (MCP) this year. Most of those same teams have serious security concerns about it. To understand how teams are navigating this tradeoff, we surveyed hundreds of AI leaders building AI agents for our first-ever state of agentic integrations report. Their top concerns?
70% worry about credential leaks and malicious servers
56% say MCP doesn't support enterprise search well
51% report ambiguous tool definitions causing incorrect tool calls
Get your free copy to learn more. Read the report

🎓 AI Skill of the Day: The "Rule of Two" for Agent Security

Giving AI agents access to your systems? Here's a framework NVIDIA uses internally. The rule: agents can do three things (access your files, access the internet, and execute code). Only let them do two at a time. If an agent can read your files and run code, internet access is the vulnerability: malware from the web runs against your private data. If it has internet and file access, you need to know the exact scope of what it can do.

How to apply it:
Sandbox first. Run agents in an isolated environment before they touch your network. NVIDIA runs OpenClaw on Brev, a sandboxed VM completely off the corporate network.
Build CLIs, not raw API access. A CLI pre-defines the exact commands an agent can run. An agent writing raw API calls? That's the agent deciding what's possible.
Involve security early. NVIDIA's team co-designed their sandboxing rather than approving it after the fact.

Copy-paste prompt:
You are a security review assistant. I'm deploying an AI agent that will have access to [describe: files/internet/code execution]. Using the "Rule of Two" framework (agents should only have 2 of 3 capabilities: file access, internet access, code execution), identify which capability I should restrict, explain why, and suggest specific sandboxing measures for my setup.

Want more tips like this? Check out our AI Skill of the Day Digest for this month. Have a specific skill you want to learn? Request it here.

🍪 Treats to Try

*Asterisk = from our partners (only the first one!). Advertise to 675K+ readers here!

*Live Event: AI Is Powerful. Why Isn't It Reliable? Dror Weiss and Eran Yahav, co-CEOs of Tabnine, explore why context has become the critical missing layer in enterprise AI. Sign up for the live event.

ChatGPT now creates interactive visual explanations for 70+ math and science concepts where you adjust variables in real-time graphs.

Fish Audio S2 generates expressive speech with sub-150 ms latency, multi-speaker in one pass, and inline emotion tags like [laugh] and [whisper] for rapid voice cloning (GitHub, HF model)—free to try.

Expo Agent builds native iOS and Android apps from a prompt in React Native, SwiftUI, or Jetpack Compose, then compiles and deploys right from the browser—join the waitlist.

Claude Code now includes built-in code review, so you can ask it to review, suggest, and apply changes directly in your codebase.

Reflct guides daily reflection and mood tracking with personalized AI journaling prompts.

📰 Around the Horn

Adobe debuted a new AI assistant built directly into Photoshop.

Zoom introduced an AI-powered office suite and said AI avatars for meetings arrive this month.

YouTube expanded its deepfake detection tool to cover politicians, government officials, and journalists.
OpenAI and Google DeepMind employees (including Jeff Dean) filed an amicus brief supporting Anthropic against the US government's supply-chain risk designation.

Legora reached a $5.55B valuation as the AI legal-tech boom continues.

Want absolutely EVERYTHING that happened in AI this week? Click here!

FROM OUR PARTNERS

Is your Python AI environment a security blind spot? In Snyk's new white paper, discover the attack surfaces hiding in plain sight within the modern Python AI environment and learn how to regain visibility and control. You'll learn how to surface toxic flows with MCP scan before they become incidents, generate an AI-BOM, detect shadow AI, and audit AI component risks to maintain continuous compliance. Download Now

📖 Midweek Wisdom: Worth reading this week:

The Verge: Laid-off lawyers, PhDs, and scientists are now doing precarious gig work training the exact AI replacing their careers.

Zvi Mowshowitz breaks down Claude Code, Cowork, and Codex as the first real "AI coworkers," with a clear-eyed look at the autonomy vs. data-loss tradeoff.

Martin Alderson debunks the viral claim that Claude Code costs Anthropic $5K per user; the real number is closer to $500. The HN thread (296 comments) is worth the scroll.

WIRED: Can AI kill the venture capitalist? When startups need less capital and AI automates diligence, the VC model starts looking fragile.

VC Cafe: The researcher's new job is writing the spec, not running the experiment.

That's all for now. What'd you think of today's email?

P.S.: Before you go… have you subscribed to our YouTube channel? If not, can you?

Source: The Neuron | Published: 2026-03-11
😺 🎙️ Watch: It takes 2 years to build a $1B startup now (and 2 weeks to replace 8 years of R&D)

Your browser does not support the audio element.

Welcome, humans. Startup seed deals (the ones that fund new companies) are at a six-year low. But the total dollars invested? Roughly the same. That means bigger checks are going to fewer companies—and half of all venture funding now flows to AI-native startups. Meanwhile, the timeline to build a billion-dollar startup has compressed from 7–10 years to 2–3 years. Solo founders are up 10%. And if you slap "AI" on your marketing? One CMO says that's "jazz hands"—and it's backfiring.

In our latest podcast episode, we sat down with Nicole Baer, CMO of Carta, to dig into the most comprehensive startup dataset in private markets—and find out what's actually happening under the hood of the AI boom. Plus, keep scrolling for our interview with Dr. Qichao Hu, CEO of SES AI, on the AI agents compressing 8 years of battery R&D into 2 weeks, and more!

Click the image to watch on YouTube. Here are some of our favorite parts:

(2:59) "Half of all venture funding is going to AI-native startups now. The world of everything else, which used to be 100% of the world, is now 50%."
(5:52) Solo founders are up 10% in five years—and it's entirely because of AI. They're building faster, keeping costs down, and scaling without co-founders or early hires.
(6:56) Startups are hitting $1B in revenue in 2–3 years instead of 7–10. Nicole breaks down why this is becoming the norm, not the exception.
(14:43) "AI in your marketing is jazz hands." Nicole explains why the premium for putting "AI" in your product name is on the wane—and what actually works instead.
(20:18) The dilution surprise: AI companies are commanding massive valuations without giving up more ownership. Founders are getting more money while keeping more of their company.
(22:26) Are we in an AI bubble? Nicole says no—it's a peak that needs to normalize. There's a difference, and she explains it.
(26:52) Nicole's vision for synthetic personas: imagine walking an AI through your entire campaign—billboards, elevators, digital ads, events—and testing how they respond before you spend a dollar.
(36:04) Want to raise money outside the Bay Area? The regional data is "even more true" than before. AI has actually hardened San Francisco's grip on startup funding.
(47:23) "AI slop is brand destructive." Nicole's warning: if you don't define your brand, it gets defined along the way—and shortcuts with AI content will cost you.

Why watch this? Because Carta sits on one of the most comprehensive private-market datasets in the world, and Nicole doesn't hold back. If you're a founder, work at a startup, or invest in them, her point at (20:18) about valuations and dilution alone is worth your time.

Watch and/or listen now: YouTube | Spotify | Apple Podcasts

P.S. Nicole also shared how her team at Carta did a massive Claude analysis of all their sales discovery calls—mapping exactly what happens in the first conversation, what the margins look like, and what assets to deploy at each stage. Work that used to require weeks and consulting firms, done by two product marketers. That's at (30:23).

Real quick: Want to see your AI-adjacent product or service show up right here, below these podcast promos? Click here to advertise to our 675K readers.

JUST LAUNCHED: NVIDIA's Kari Briski on Nemotron 3, NemoClaw, and much more.

In our latest podcast episode, Corey sat down with Kari Briski, NVIDIA's VP of Generative AI for Enterprise, live at GTC 2026 to break down the launch of Nemotron 3 and their new NemoClaw agent shell for running the OpenClaw agent safely and securely. Check out some of our favorite parts:

(1:45) What Nemotron 3 Super actually is, and why NVIDIA published their entire model roadmap.
(7:26) The home-GPU reality check: Corey's running 120B parameters on an RTX 4000 at triple the speed of a 70B model.
(8:09) Why 120 billion parameters only activates 12 billion at a time—and what that means for your hardware.
(13:38) The wildest AI agent story yet: an NVIDIA dev's AI caught a water leak, texted him, and emailed a plumber.
(17:34) Open-source AI token generation exploded 35x in one year—here's what's driving it.
(20:49) Kari's long-term vision: Nemotron as a software development library, not just a model.

Listen now on YouTube | Spotify | Apple Podcasts

P.S. If you only have 90 seconds, jump straight to the water leak story at (13:38). It's the most convincing AI agent demo we've seen—and it happened by accident.

ALSO THIS WEEK: The AI Rewriting Battery Science (and Maybe Everything Else)

It used to take 8 years to test whether a new battery material actually works. SES AI just compressed that to 2 weeks. In our second new episode this week, we sat down with Dr. Qichao Hu, Founder, Chairman & CEO of SES AI, to understand how his team built Molecular Universe—an AI database of 10 trillion small organic molecules—and paired it with autonomous "wet lab" robots to discover new materials at a pace that was physically impossible before. This is one of the clearest examples we've seen of AI agents solving hard physical-world problems—not just digital ones.

Click the image above to watch on YouTube! Here are some of our favorite parts:

(5:28) A human scientist reads 3–5 papers a day. Their AI agent processes tens of thousands—with perfect memory. That alone compresses a month of idea creation into minutes.
(6:30) The autonomous robot that runs 5,000 formulations in one morning. A junior scientist does 10–20 by hand. No errors. No coffee breaks.
(9:14) In 40 years, the battery industry screened about 10³ different small molecules. The universe of possibilities? 10⁶⁰. We've barely scratched the surface.
(20:05) Molecular Universe isn't just for batteries. It's already being used for detergents, cosmetics, pesticides, oil and gas, and paint.
The goal: an encyclopedia of every material on Earth.
(29:00) The moment that gave us chills: the AI extracts ~1,000 parameters from battery data. Human scientists can only see 20. The AI's patterns are stronger and more accurate—but we can't explain what they mean. Dr. Hu's words: "It's like a different language, not meant for us human species to understand. But it works."
(40:44) SES AI extended humanoid robot battery life from 2–4 hours to a full 8-hour shift. When the battery dies, another robot swaps it—so the humanoid never leaves the line.
(46:08) The ultimate flywheel: AI discovers molecules → molecules go into batteries → batteries power the data centers that run the AI.

Here's what makes this even wilder: according to The Information, SES AI's model has already produced six electrolyte breakthroughs in nine months—formulations that wouldn't have been possible without the molecular database. Dr. Hu's goal? Fully autonomous, lights-off R&D labs where the only human involvement is the initial prompt. Right now, he says, "we are at a point where it's, like, half human, half machine." For those steeped in Silicon Valley culture, that's short-term bullish for the "Centaur," the AI-human hybrid model.

Why watch this? Because everyone talks about AI agents in software. This is AI agents in science—discovering materials, running real-world experiments, and finding patterns in physics that humans literally cannot perceive. If you want to see what "AI in the real world" actually looks like, start at (29:00).

Watch and/or listen now: YouTube | Spotify | Apple Podcasts

P.S. We also asked Dr. Hu about AR glasses—one of the biggest power-constrained products in tech right now. Turns out, some companies are already using Molecular Universe to try to solve that exact problem. That's at (44:30).
Also, after the interview, we asked him about sodium batteries (a personal passion area of Grant's), and he suggested that sodium is actually one of the most searched topics on Molecular Universe after lithium. Gee, wonder why… (lots of progress in this area, like here, here, here, and here).

Dive deeper with these resources:
Carta's free startup data & insights
SES AI
State of Seed deals 2025
Molecular Universe
The Information's coverage of SES AI's breakthroughs
Artificial Analysis—compare top AI models

Stay curious,
The Neuron Team

P.S. If you missed it, Dr. Hu's episode is one of our favorite conversations we've had. An AI that speaks a language humans literally can't understand, but produces results that are more accurate than anything we can do ourselves? Yeah, that one stuck with us. 🔋

And if you haven't subscribed yet, please do! Click the image below to go to our channel and hit "subscribe" to get notified right when new videos go live. We have a goal to hit 50K subscribers by the end of the year (if not 100K), and we're almost to 20K! If you like learning about AI, and already watch some of our videos, do us a favor and click here to subscribe today.

That's all for today. For more AI treats, check out our website.

What'd you think of this podcast episode? Pick an answer below, then tell us why with the "additional feedback" option.
🐾🐾🐾🐾🐾 Exactly what I wanted!!! More like this...
🐾🐾🐾 Pretty interesting, for what it was!
🐾 Not for me (and here's why).
Login or Subscribe to participate in polls.

P.P.S: Love the newsletter, but don't want to receive these podcast announcement emails? Don't unsubscribe—adjust your preferences to opt out of them here instead.

Source: The Neuron | Published: 2026-03-20
😺 Over $2B in AI funding hit in a single news cycle
Source: The Neuron | Published:
😸 Google vs OpenAI: Battle of the Super-Apps
Source: The Neuron | Published:
😺 The Enterprise AI Platform War Has a Scoreboard Now. Anthropic Is Winning.
Source: The Neuron | Published:
😺 NVIDIA CEO: "Every company needs an OpenClaw strategy" now
Source: The Neuron | Published:
Huawei Makes Another Cross-Industry Move: A New Media Corps to Reshape the Media Landscape Through Dual Technology-and-Ecosystem Drive

Big news recently from the tech sector: Huawei announced its formal entry into the media industry, forming a dedicated Media Corps (传媒军团) to drive the industry's digital transformation and build a full-scenario media ecosystem. The move landed like a bombshell, instantly stirring wide attention and heated discussion across tech and media circles.

As a globally renowned technology giant, Huawei has achieved remarkable results in telecommunications, smartphones, new-energy vehicles, artificial intelligence, and more; its technical strength and capacity for innovation have long driven industry change. This cross-industry push into media, with a dedicated corps, underscores Huawei's strategic vision and confidence in its technology.

Why would Huawei enter media at all? Behind the move lies a far-reaching strategy: it is an important step in building out Huawei's digital ecosystem. Traditional media currently faces many challenges, and digital transformation is urgent: inefficient content production, narrow distribution channels, and difficult monetization have constrained the industry's development. Huawei's portfolio of core technologies, including 5G, AI, cloud computing, big data, and the HarmonyOS (Hongmeng) ecosystem, can supply comprehensive technical solutions. By empowering the industry with technology, Huawei aims to help traditional media, new media, and the film and television sector upgrade digitally, break through bottlenecks, and restructure the media ecosystem.

The move also rounds out Huawei's full-scenario ecosystem. Huawei has already built an ecosystem spanning phones, tablets, PCs, cars, and smart-home devices; media content is the soul of that ecosystem, and only deep integration of content and devices delivers a complete, convenient user experience. The Media Corps plans to build a dedicated content platform integrating film and TV, news, short video, and livestreaming, adapted to Huawei's full range of devices so that content flows seamlessly between them, transforming how users consume it.

The Corps' core competitiveness lies in the dual drive of technology and ecosystem. On technology: 5G ensures low-latency, smooth delivery of ultra-high-definition video and livestreams; AI supports intelligent content production, recommendation, and moderation, raising production efficiency and quality; cloud computing and big data supply compute power and user analytics for precise distribution and monetization. On ecosystem: Huawei's massive device user base, covering hundreds of millions of people, offers an enormous traffic gateway that addresses the media industry's shortage of traffic.

Industry analysts argue that the Media Corps is not a simple foray into content but a bid to become the media industry's "digital enabler": on one hand, providing technical support for traditional media's transformation and helping media organizations cut costs and raise efficiency; on the other, building a new content ecosystem that links creators, studios, and brands into a mutually beneficial value chain. Huawei Media is expected to push into news distribution, film and TV production, livestream e-commerce, and digital culture, launching disruptive products and services that reshape the competitive landscape.

The news sparked lively discussion online. Many netizens praised Huawei's knack for spotting industry pain points and solving them with technology, calling it a benchmark for domestic brands. Media practitioners said Huawei's entry will accelerate the industry's digital transformation, retire outdated capacity, and promote healthy development. Observers are now watching for the Corps' next moves, such as whether it will launch a dedicated content app or link up with Huawei's cars and phones to create new content-consumption scenarios.

Huawei has always pushed past industry boundaries through technological innovation, from telecom equipment to smartphones, from new-energy vehicles to AI, and now media, each step taken firmly. Forming the Media Corps extends Huawei's ecosystem strategy and exemplifies Chinese tech companies empowering traditional industries. With its technology and ecosystem advantages, Huawei is positioned to play a leading role in the media industry's digital transformation and carry the whole sector into a new stage of development.

Source: 快讯 (News Flash) | Published: 2026-03-22
A Millennium of Regular Script Reflects a New Chapter of the Era

A Millennium of Regular Script Reflects a New Chapter of the Era
On upholding tradition while innovating, from the National Regular Script Exhibition
Ma Yongming
People's Daily (March 22, 2026, Page 8)

Spring returns to the Star City; ink moistens the land of Hunan. In March, at mid-spring, the Third National Regular Script (kaishu) Exhibition and the "Hometown of the Sage of Kaishu" National Invitational Exhibition of Calligraphy Masters opened jointly in Changsha, Hunan. One exhibition displays the diverse ecology of contemporary regular-script creation; the other pays homage to the spiritual wellspring of a millennium of kaishu. Together, they offer a solemn tribute to the model of regular script represented by Ouyang Xun, and they pose and help answer the question of how calligraphy can innovate within inheritance and develop while holding to its foundations, demonstrating the cultural self-awareness and sense of mission of calligraphers in the new era.

The spirit of kaishu: the backbone and bearing of Chinese civilization

"Kai means law, model, pattern." This judgment by the Tang critic Zhang Huaiguan captures regular script's prominent place in the Chinese writing system and in Chinese aesthetics. Emerging at the end of the Han dynasty, shaped by Zhong Yao, renewed by the "Two Wangs," and brought to its summation by Ouyang Xun in the early Tang, kaishu was refined over a millennium into the canonical form of the "square character." Its upright, dignified structures and its rule of level horizontals and straight verticals are not only the utmost standard of visual form; they also carry the ethical ideal of gentleness and sincerity and the aesthetic pursuit of centered harmony, a concrete expression of the vigorous, orderly spirit of Chinese civilization.

Looking back over the history of kaishu, its stylistic evolution has always resonated with the spirit of its times. Jin-dynasty kaishu, refined and graceful, flows with the Wei-Jin ethos of "the awakening of the person" and individual liberation; Wei stele script, plain and powerful, holds the hardy vitality of the Northern Dynasties and the flourishing energy of ethnic fusion; Tang kaishu, strict in method and grand in bearing, mirrors the confident bearing and institutional tension of the High Tang. Ouyang Xun's perilous steepness, Yan Zhenqing's massive grandeur, and Liu Gongquan's penetrating bone-strength all fuse the calligrapher's character, the spirit of the age, and the deep power of culture. Yan Zhenqing forged a courtly grandeur of calligraphy from the upright integrity of a loyal minister; Liu Gongquan's admonition that "when the heart is upright, the brush is upright" raised the art into a philosophy of self-cultivation in which technique approaches the Dao. Kaishu thus transcends technique to become a symbol of Chinese cultural spirit, ideals of character, and aesthetic standards, inscribing the collective memory and cultural genes of the Chinese nation.

A contemporary appraisal: seeking a path of development between inheritance and breakthrough

The jury results and selected works of this exhibition offer a "diagnosis" of contemporary kaishu creation. The 239 works chosen from nearly 14,000 submissions show three clear shifts, while also reflecting deeper problems that remain to be solved.

The positive shifts are plain to see. First, deep study of the classics and a return to sources: works modeled on Tang kaishu, especially the lines of Ouyang Xun and Yan Zhenqing, have increased markedly, accounting for over sixty percent. After wide exploration, creators are again advancing toward the summits of kaishu history, studying the essence of its methods and its spiritual sources. Study of Wei-Jin small kaishu, Northern and Southern Dynasties stele inscriptions, and sutra-copying script has likewise grown more refined, moving from the pursuit of outward likeness to a grasp of spirit, showing a conscious resolve to "take the highest as one's model." Second, fusion of stele and model-book traditions: creators are breaking through the binary wall between "Wei stele" and "Tang kaishu," intelligently merging the strength of the stele school with the subtlety of the model-book school to reactivate traditional method, some using the natural charm and striking forms of northern steles to enliven Tang method, others using the regularities of Tang brushwork to distill and elevate the vividness of folk carving. Works both rigorous in method and full of life are the fruit of this "transforming the ancient into one's own." Third, a return to fundamentals, with technique and Dao advancing together: the contrived, collaged "exhibition style" has clearly waned, and creators are instead cultivating subtlety of brushwork, quality of line, and variation of structure, focusing on the intrinsic qualities of calligraphy itself. How to seek daring within balance and reveal temperament within method has become the key question of breakthrough, marking a turn from attention to outward form toward the inner quality of brush and ink.

Yet the challenges and worries cannot be ignored. First, the divorce of technique from Dao: some works are skilled but lacking in meaning, or strain for novelty and lose method, exposing a tendency to prize technique over spirit. Second, the risk of homogenization: if the study of classics stops at surface imitation, or leans too heavily on fashionable templates, styles will converge, individuality will fade, and artistic vitality will weaken. Third, gaps in learning: monotonous choice of texts, weak ability to compose one's own poetry and prose, and lapses in textual scholarship persist; the deep integration of calligraphy with literature, history, and philosophy urgently needs strengthening, reflecting the pressing need to cultivate a "scholarly air." Fourth, the need to optimize the ecosystem: the interaction between jury mechanisms and creative orientation still needs improvement; how institutional design can more effectively encourage work that unites substance and form, technique and Dao, and writing and ink, while curbing restless careerism, is a problem the field's governance must solve.

Upholding tradition while innovating: writing the calligraphy answer of the new era

Grounded in the living practice of cultural construction in the new era, the flourishing of kaishu must anchor itself in the strategy of "upholding tradition as the base, innovation as the wings," excavating within inheritance and elevating within innovation.

Upholding tradition is the root. First, uphold the methods of calligraphy: brush method, character method, and compositional method are the crystallization of a millennium of wisdom and the lifeline of kaishu. Studying classic rules such as Ouyang Xun's "Thirty-Six Methods" and Zhiyong's "Eight Methods of the Character Yong" is a required course for inheriting kaishu and the logical starting point of artistic innovation. Second, uphold the aesthetic spirit: the bearing of centered harmony, the power of vigorous sincerity, and the charm of restrained depth are kaishu's inner soul; its traditional aesthetic genes must be activated in the language of the times so that the classical spirit is reborn in contemporary contexts. Third, uphold the cultural root: hold fast to the tradition that "calligraphy is the painting of the heart" and "the character is like the person," treating calligraphy as an important path for nourishing the mind and tempering character, uniting artistic quality with personal quality, and carrying forward the cultural self-awareness that "writing carries the Dao" and "ink conveys feeling."

Innovation is the source of vitality. First, innovate in conception: break down the walls between stele and model-book, ancient and modern, region and region, and connect traditional resources with an open mind. Those devoted to kaishu should draw line quality from seal and clerical scripts and borrow rhythm from running and cursive scripts, innovating expressive paradigms within a broader view of calligraphic history. Second, innovate in expression: on a foundation of deep command of method, infuse contemporary aesthetics and the thinking of the times, exploring a personal language of brush and ink so that the ancient rules of kaishu gain the vitality of the era. The explorations in this exhibition that fuse the charm of folk carving, for example, show the possibility of translating traditional resources into the present. Third, innovate in dissemination: use digital technology and new media platforms to renew aesthetic education, bringing kaishu into public spaces and daily life. Through calligraphy in schools, communities, and online, raise the aesthetic literacy of all, realize the cultural value of "shared beauty put to use," and let the millennium-old art of kaishu become a spiritual spring nourishing modern life.

The China Calligraphers Association's principles of "establishing heart and forging soul, upholding tradition while innovating, letting writing and ink illuminate each other, and putting shared beauty to use" embody this strategy in concentrated form. The significance of the national kaishu exhibition lies not only in presenting creative results but in setting aesthetic benchmarks, clarifying the direction of development, and building consensus in the field. As viewers move between the classics of the "Hometown of the Sage of Kaishu" and the innovative explorations of the national exhibition, we hope this will be not only a visual feast but a resonance of cultural identity and an awakening of mission. The long river of kaishu flows on without pause, nourishing the national spirit and shaping the backbone of culture. Let us guard the roots of tradition with reverence, answer the call of the times with the will to innovate, embrace future possibilities with open minds, and, on the foundation of a millennium of kaishu, dip deep into the ink of the new era to write a new chapter with both historical depth and contemporary height.

(The author is Secretary of the Party Leadership Group of the China Calligraphers Association)

Source: People's Daily | Published: 2026-03-22
Yin and Yang in Balance, Spring Colors the Land (Reading Paintings)

Yin and Yang in Balance, Spring Colors the Land (Reading Paintings)
Yang Canwei
People's Daily (March 22, 2026, Page 8)

The Spring Equinox was anciently called "rizhong" (the sun at its middle) and "riyefen" (the dividing of day and night). On this day the sun stands directly over the equator; day and night are of equal length, yin and yang in balance, the moment when heaven and earth are most in equilibrium. The ancients sacrificed to the sun to pray for an abundant year, while folk customs such as balancing eggs upright, flying kites, and eating spring greens wove the celebration of life and the understanding of the season into daily living.

The distinctive appeal of the Spring Equinox rests on the meeting of natural balance and human perception, which has given artists a living creative theme. Through the ages, from court painters to literati, a great many fine works have expressed the meaning of this solar term. Among them, the "pictures of tilling and weaving" (gengzhi tu) are most representative, serving to transmit agricultural technique and embodying the court's attention to the order of heaven and earth and the livelihood of the people. The tilling-and-weaving theme has been handed down since the Song dynasty. Among the many such works, the Qing court painter Chen Mei's "Album of Tilling and Weaving" skillfully blends Chinese and Western painting techniques. The album contains 46 leaves, of which the two depicting the stone roller (liuzhou) and the sowing of seedlings faithfully record scenes of spring plowing and planting. The roller leveling the ground and the broadcasting of seed correspond to the actual farm work around the Spring Equinox, and the whole album brims with the flavor and vitality of life.

The Spring Equinox has three pentads: first, "the dark birds arrive"; second, "thunder begins to sound"; third, "lightning begins." In the Qing "Album of Ink Marvels" (Momiao Zhulin), the "Spring Equinox" leaf (see figure) of Zhang Ruoai's "Pictures of the Twenty-Four Solar Terms" sketches this phenological scene: dark clouds and flashing lightning fill the top of the picture, mountains stretch across the middle, and below, trees stand in rows while a farmer, rake on shoulder, walks a field path, conveying the deep meaning of cherishing the farming season. The equinox also has three flower signals: first the crabapple, then the pear blossom, then the magnolia. The Qing painter Yang Dazhang's album leaf "Crabapple and Swallows" captures with fine brushwork the delicacy of crabapple blossoms and the liveliness of swallows, its colors bright yet elegant. Taking crabapple and swallows as its subject, the painting is a perfect portrait of equinox phenology, and the swallow, the season's emblematic migrant, traditionally symbolizes a peaceful household and flourishing descendants. Here flower and bird embody the tidings of heaven and earth; brush and ink are the artist's perception of the season's turning.

Flying paper kites at the Spring Equinox is a custom with a long history. The anonymous Qing leaf "Flying Kites" from the album "Joy Throughout the Realm" and the Yangliuqing New Year print "Ten Beauties Flying Kites" both depict the joy of spring kite-flying. The painters freeze this moment in different forms, rendering the figures' delight and ease to the full.

Another important equinox custom is "eating spring greens," finely depicted in "Gathering Ferns by a Mountain Stream," attributed to the Ming painter Dai Jin. The painting centers on the spring life of a farm household deep in a mountain valley: the forest scenery is secluded and misty, an old farmer and his wife dig wild greens in the fields, and in the thatched cottage two women cook at the stove while a child beside them helps tend the fire, a scene at once busy and warm.

After the equinox comes the season of mild days and riotous color, and spring's brilliance condenses into lasting poetry under the painter's brush. In the Qing painter Yun Shouping's "Warm Spring over Lake and Hills," ranges rise and fall, small boats drift on a broad river, a few houses hide among wooded hills, and green trees and red blossoms set each other off on either side. The brushwork is free and graceful, the coloring clear and gentle, fusing Song and Yuan sensibilities, showing both the grace of Jiangnan landscape and an overflowing, contented ease of life, the very comfort of a spring day.

Through these inks and colors we see how rich the Spring Equinox is. As an important solar term, it is more than a mark on the calendar; it condenses living wisdom passed down through generations. Painters have answered the same season each in their own way, carrying the same cultural memory, offering people visual images in which to settle body and mind and observe nature, and reminding us, here and now, to savor and grasp the beauty of this season.

(The author is a researcher at the Institute of Fine Arts, Chinese National Academy of Arts)

Source: People's Daily | Published: 2026-03-22