Prompts Are a Crutch, Legal AI Needs Memory

By Alex Zilberman, CEO, Chamelio

Legal AI is having its 'prompt library' moment. Teams are building internal prompt packs, vendors are shipping prompt templates, and everyone is quietly hoping that better phrasing will equal better outcomes. It won't. Prompt libraries rot and don't scale. Memory turns every negotiation into a compounding advantage by learning what gets accepted, rejected, escalated, and ultimately signed, and then producing more consistent outputs over time.

And it's not just negotiations. Memory becomes truly powerful when it's gathered from all your legal AI interactions, across:

- Word-based negotiations (what you apply, rewrite, ignore, and why)
- Web agent legal research (what sources you trust, what you cite, what you discard)
- Questions asked and answered (what your team asks repeatedly, and what 'good' answers look like internally)
- Querying your contract repository and legal knowledge base (how you search, what you pull, what ends up being used)

That difference, one-off instructions versus compounding learning, is where the next real wave of legal AI will be won.

Prompting doesn't compound. It decays.

Prompting is a human workaround for a system that doesn't know your reality. It usually starts strong. A few power users build prompts that genuinely help. Then the organization grows, standards evolve, fallback language changes, business priorities shift, and new issues show up. Suddenly the 'great prompts' are:

- Incomplete (they don't reflect the latest internal standards)
- Inconsistent (different lawyers edit them differently)
- Brittle (small changes produce big output swings)
- Invisible (no one can tell which version is the one)

So the prompt library becomes what every shared doc becomes: a graveyard of half-truths. In legal work, the failure isn't that the AI can't write. It's that the AI can't reliably write to your standards, under your constraints, and keep doing that as those standards shift. That can't be solved with more prompt engineering.
It can only be solved with memory.

Legal work is repeatable judgment under constraints

Legal work isn't creative writing. It's repeated decisions:

- 'Do we accept this limitation or carveout?'
- 'When do we escalate this issue, and to whom?'
- 'What's our fallback position in this scenario?'
- 'How do we explain this risk to the business in a way they'll actually understand?'

Those decisions aren't random. They depend on internal policy, risk appetite, context, the matter at hand, and the team's communication style. And the most valuable part, the part that makes a legal team feel like a legal team, is the consistency of those decisions. That's why the best legal AI doesn't just need to be smart. It needs to know what 'smart' means for your team, and remember it. That consistency comes from institutional memory. Not prompts.

What 'memory' means in legal AI (and what it doesn't)

When I say 'memory,' I'm not talking about a model that hallucinates a personality. I'm talking about a product layer that captures and reuses what legal teams already do, across the tools they already use. Memory can be as simple as:

- Which redlines were applied versus discarded (in Word)
- Which sources were used versus ignored (in research)
- Which questions get asked repeatedly, and which answers get accepted (in Q&A)
- Which documents are retrieved from the repository and actually used (in your knowledge base and contract repository)
- Which issues triggered escalation, and what resolved them
- What ended up final (signed, filed, approved, or relied upon)
- Which policies and playbooks were applied in which contexts, and when they were overridden
- How we explain positions to counterparties (tone, detail level, and what framing gets traction)
- Who handled which types of requests, and which routing decisions worked

This can be stored as structured signals tied to topic, clause type, matter type, jurisdiction, counterparty profile, business unit, and role.
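The 'structured signals' framing above can be sketched as a small record type plus a simple aggregate. This is an illustrative sketch only; the field names and the `acceptance_rate` helper are invented for this example, not Chamelio's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical shape of one "memory signal": a single observed decision
# (e.g. a redline accepted in Word) tagged with the context dimensions
# the article lists (clause type, matter type, jurisdiction, and so on).
@dataclass
class MemorySignal:
    action: str            # "accepted", "rejected", "escalated", "signed"
    source: str            # "word_redline", "research", "qa", "repository"
    clause_type: str       # e.g. "limitation_of_liability"
    matter_type: str       # e.g. "msa", "nda"
    jurisdiction: str
    business_unit: str
    role: str              # who acted, e.g. "senior_counsel"
    observed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def acceptance_rate(signals, clause_type):
    """Share of signals for a given clause type that were accepted."""
    relevant = [s for s in signals if s.clause_type == clause_type]
    if not relevant:
        return 0.0
    return sum(s.action == "accepted" for s in relevant) / len(relevant)
```

Aggregates like this, computed per clause type and context, are what would let a memory layer answer "what do we usually do here?" instead of relying on a prompt.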
It is not 'training your own model in a black box.'

The governance question: what if it learns the wrong thing?

Every GC will ask: 'If it learns from our actions, can it learn the wrong behavior?' Yes, unless you build memory like an enterprise system, not a consumer personalization feature. Practical guardrails make this safe:

- Versioning. Memory tied to a specific policy version, with rollback
- Scope. Memory applied by matter type, jurisdiction, business unit, and role
- Confidence thresholds. 'Suggest' versus 'auto-apply' based on evidence volume and risk
- Human overrides. A single explicit 'this is the new standard' action updates the system faster than passive observation
- Auditability. Show why a suggestion was made, for example 'based on X accepted outcomes in similar situations'

These controls aren't optional. They're what makes memory deployable.

The new KPIs for legal AI

If your goal is faster, safer legal work, 'AI usage' is not the KPI. The KPIs that matter are operational:

- Variance reduction: fewer different outcomes for the same issue across the team
- Escalation rate: fewer unnecessary escalations without increasing exceptions
- Time to answer: how quickly you get to an internally acceptable response (especially in research and Q&A)
- Time to first clean output: how close the first draft or answer is to acceptable
- Rework volume: how often lawyers redo the same changes
- Adoption beyond power users: whether average users get reliably good results

Memory should move these numbers. If it doesn't, it's not memory, it's just autocomplete.

A simple maturity model: from prompts to memory

Level 1: Prompted assistance. Good first drafts and answers. Inconsistent. Heavily user-dependent.
Level 2: Policy-aware assistance. Playbooks, guidance, and source lists embedded; still largely static.
Level 3: Memory-driven consistency. Learns from accepted and rejected edits, trusted sources, and outcomes. Reduces variance.
Level 4: Memory-driven execution. Memory triggers routing, approvals, follow-ups, and governance actions. AI changes behavior across workflows.

Most of the market is stuck between Level 1 and Level 2. The winners will climb to Levels 3 and 4.

The point

The next era of legal AI isn't about making models sound more lawyerly. It's about building systems that behave like experienced legal teams: consistent, contextual, and continuously improving. Prompts don't compound. Memory does. In a world where everyone has access to the same frontier models, the compounding layer, the memory layer, becomes the only real competitive advantage in day-to-day legal work.

How We're Building This at Chamelio

This isn't theoretical. At Chamelio, we're building legal memory as a product layer across every AI interaction lawyers have.

[ This is a sponsored thought leadership article by Chamelio for Artificial Lawyer. ]
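The 'suggest versus auto-apply' guardrail described in the article can be sketched as a small decision function. The thresholds, evidence counts, and risk labels below are invented for illustration; any real deployment would tune them against its own policies.

```python
# Illustrative sketch of a confidence-threshold guardrail: a learned
# position is only auto-applied when there is enough supporting evidence
# and the clause risk is low; otherwise it is surfaced as a suggestion
# or held back entirely. All numbers here are hypothetical.
def decide_mode(accepted_count: int, rejected_count: int, risk: str) -> str:
    """Return 'auto_apply', 'suggest', or 'hold' for a learned position."""
    total = accepted_count + rejected_count
    if total < 5:                       # too little evidence to act on
        return "hold"
    acceptance = accepted_count / total
    if risk == "low" and acceptance >= 0.9 and total >= 20:
        return "auto_apply"             # strong, consistent, low-risk signal
    if acceptance >= 0.6:
        return "suggest"                # promising, but keep a human in the loop
    return "hold"
```

The point of the split is auditability: every auto-applied edit can cite the evidence volume and acceptance rate that justified it.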

Source: Artificial Lawyer | Published: 2026-03-05
Product Walk Through: Harvey – Shared Spaces

Today's AL TV Product Walk Through is all about Harvey's Shared Spaces, a new product designed to redefine how law firms and clients collaborate on legal work. Please press Play to watch the video, or you can also go direct to the AL TV Channel to watch over 220 other videos for free. AL TV Productions, 2026.

And here's how Harvey describes things: 'A Shared Space is a secure, branded environment within Harvey where firms and clients can organize and co-create work in one unified, searchable place. Instead of relying on client portals or file-sharing tools, both sides contribute to shared resources in real time, collaborating across internal teams and with external partners. Robust governance controls, including object-level permissions, ethical walls, audit trails, and administrative controls, ensure sensitive work remains protected at every stage.

'For law firms, Shared Spaces enable differentiated service delivery and deeper client relationships by turning expertise into reusable, compounding value. For in-house teams, they provide real transparency into how outside counsel is working and how knowledge is being built over time. Customers worldwide are already collaborating on Harvey with Shared Spaces.'

If you would like to know more about Shared Spaces then please see here.
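The combination of object-level permissions and ethical walls that Harvey's description mentions can be illustrated with a minimal access check. This is a generic sketch of the concept, not Harvey's actual access-control model; the data shapes and the `can_access` function are hypothetical.

```python
# Hypothetical sketch of object-level permissions plus ethical walls:
# a user may open a shared object only if they hold an explicit grant
# for that object AND no ethical wall separates their group from the
# object's matter.
def can_access(user: dict, obj: dict, walls: list) -> bool:
    """Return True if the user may open the object."""
    for wall in walls:
        # an ethical wall blocks an entire user group from one matter
        if user["group"] in wall["blocked_groups"] and obj["matter"] == wall["matter"]:
            return False
    # object-level permission: grants are per object, not per folder
    return obj["id"] in user["permitted_objects"]
```

Checking the wall before the grant matters: a wall must override even an explicit permission, which is what makes it an audit-defensible control rather than a convention.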

Source: Artificial Lawyer | Published: 2026-03-05
WTO members consider new e-commerce proposal and previous submissions ahead of MC14

Switzerland, on behalf of co-sponsors, presented the proposal for the Ministerial Conference to mandate the establishment of a Committee on Digital Trade that would build on the work done under the Work Programme and provide a stable forum for digital trade discussions at the WTO. This would be in addition to an extension of the moratorium on the imposition of customs duties on electronic transmissions. Members provided their initial comments on the proposal and also expressed views on two previous submissions: one by the African, Caribbean and Pacific (ACP) Group proposing the reinvigoration of the Work Programme with a focus on the development dimension and the extension of the moratorium until the following ministerial conference; and one by the United States and co-sponsors for a permanent moratorium. The facilitator of the Work Programme on Electronic Commerce, Ambassador Richard Brown of Jamaica, welcomed members' active participation and called on proponents to try to find landing zones. "While there are some areas of convergence, some critical substantive differences remain. In this regard, I would like to ask the proponents to engage with each other, to try and identify any common elements, areas of flexibility and the possible way forward," he said. He said discussions will continue ahead of the General Council meeting on 10-11 March to try to identify common elements for a draft ministerial decision. The facilitator also noted that the early version of a database compiling technical assistance and capacity building on e-commerce is now available for members to review and provide feedback on.

Source: World Trade Organization
EU contributes EUR 1 million to strengthen trade capacity in developing economies, LDCs

This contribution to the WTO's Global Trust Fund (GTF) will help finance the implementation of the WTO's technical assistance plan through targeted capacity-building initiatives. These programmes enable government officials from developing economies including LDCs to deepen their understanding of the multilateral trading system and enhance the implementation of their WTO obligations. WTO Director-General Ngozi Okonjo-Iweala welcomed the EU's contribution: "The European Union's EUR 1 million contribution to the Global Trust Fund comes at a critical time and will help maintain valuable technical assistance activities for developing economies and least developed countries. I welcome the EU's leadership and hope it will encourage other members to step forward in support of the WTO's capacity building work." H.E. Mr João Aguiar Machado, Ambassador and Permanent Representative of the European Union to the WTO, highlighted the importance of continued EU support for WTO technical assistance programmes: "The European Union is pleased to contribute EUR 1 million to the WTO's Global Trust Fund. At a time when sustained and predictable financing for technical assistance is particularly important, this contribution reflects the European Union's longstanding support for the Global Trust Fund and for the WTO's technical assistance programmes. Strengthening the capacity of developing economies including least developed countries is essential for a fair and effective multilateral trading system." The Global Trust Fund finances approximately 280 activities each year, primarily through tailored training delivered at the national and regional levels, including online. These activities span a wide range of trade-related areas, including agriculture, digital trade, import licensing, standards, trade and environment, and trade negotiation skills. Since its establishment in 2001, the Global Trust Fund has supported over 2,800 training workshops worldwide. 
The year 2024 saw the highest number of technical assistance activities in the last decade, with over 19,000 participants trained throughout the year. Building on more than two decades of constructive cooperation with the WTO, the European Union has contributed a total of CHF 34.5 million to various WTO trust funds. This latest contribution recognizes and reinforces the European Union's commitment to promoting a fair, inclusive and rules-based multilateral trading system.

Source: World Trade Organization
E-cargo bikes: 'Stuttgarter Rössle' rental programme to be realigned

The City of Stuttgart is developing its 'Stuttgarter Rössle' e-cargo bike rental programme further and will in future target it specifically at families with limited financial means. At the same time, the e-cargo bike subsidy programme for Stuttgart families ended at the close of 2025. A final report attests to the programme's strong impact on transport and the environment.

Source: STUTTGART | Published: 2026-03-05
Stuttgart and Nanjing reaffirm closer cooperation on economic issues

The City of Stuttgart and Nanjing, capital of the Chinese province of Jiangsu, have signed a joint memorandum of understanding to further deepen their partnership of more than 30 years. The agreement was signed at Stuttgart City Hall by Lord Mayor Frank Nopper and his counterpart Li Zhongjun. The aim is to expand cooperation between the two cities, particularly in forward-looking economic and technological fields, and thereby lead the 30-year partnership into a new era. Before the signing, representatives of both cities had already discussed economic topics at the new Haus des Tourismus. There, Lord Mayor Nopper spelled out what the two sides need each other for in the economic sphere: 'As cooperation partners, as customers and as investors, for mutual benefit and under fair conditions and rules.'

Source: STUTTGART | Published: 2026-03-05
Serving a cultural monument: volunteers look after the Old Town Hall

A great deal of devotion goes into the Old Town Hall (Altes Rathaus) of Weilimdorf. The 421-year-old building below the Oswaldkirche 'is our jewel', says Julian Schahl, head of the district of 32,000 residents. Last year this jewel provided a stage for more than 150 events. For many couples the Old Town Hall holds a special place in their hearts: in 2025 alone, 30 couples said 'I do' there. That the jewel at the heart of the district can remain a place of community and culture, and a sought-after wedding venue, in 2026 as well is thanks to the dedicated team of the Heimatkreis: chairwoman Edeltraud John and Walter Niklos, together with Bärbel Breuel, Loni Huss, Bernd Klingler and Jannette Wöhrle, have agreed to take on the caretaking duties at the Old Town Hall for the time being, on a voluntary basis. Historic ensemble: the Old Town Hall of 1605 below the Oswaldkirche.

Source: STUTTGART | Published: 2026-03-05
Weapons and knife ban zone in Stuttgart is showing results

Stuttgart's Lord Mayor Dr. Frank Nopper, who is responsible for designating the weapons and knife ban zone, and Police President Markus Eisenbraun have expressed great satisfaction at the marked decline in serious knife crime within the zone covering the core of Stuttgart's city centre. Offences against life and limb, together with violent offences, fell from 91 cases in 2024 to 46 cases in 2025, almost halving. 'The designation of the weapons and knife ban zone has greatly increased both safety and the sense of safety in the heart of our city centre. The figures show impressively that we are on the right track,' said Lord Mayor Dr. Frank Nopper, adding: 'We should stay our clear course and maintain the strong police presence in the current year as well.'

Source: STUTTGART | Published: 2026-03-05
Residents' assembly in Stuttgart-Nord: help choose the topics online now

The online procedure can be accessed via the city's participation portal Stuttgart – meine Stadt and is divided into two phases. From 9 to 23 March, anyone interested can tick the topics from a list that matter most to them for their district. From 24 March to 13 April, specific concerns and questions can be submitted online, and contributions from other participants can be rated. Questions submitted via the portal are answered by the relevant city departments directly online on the platform; they will not automatically be raised again on the evening of the assembly. Anyone who would like to put a question to the city leadership in person on 27 April can do so on site: cards are available in the foyer of the Sparkassenakademie, where residents of the Stuttgart-Nord district can note down their concern and register to speak. Lord Mayor Dr. Frank Nopper will call on participants individually during the course of the evening.

Source: STUTTGART | Published: 2026-03-05
City supports art and culture projects with new 'Kulturelle Teilhabe' funding programme

Artists, associations, organisations, initiatives and social institutions such as multi-generation centres, youth welfare services and federations can now receive financial support from the new 'Kulturelle Teilhabe' (cultural participation) fund. Until 15 May, applications can be submitted for individual project funding and for support for new partnerships between artists and civil society under the 'Funding of partnerships and cooperations for social cohesion' scheme. Small projects during the course of the year can also be funded outside this deadline. In total, there are three ways to obtain money from the fund. The municipal council of the City of Stuttgart has approved a total of EUR 100,000 for the new funding pot.

Source: STUTTGART | Published: 2026-03-05
From jargon to clarity: bridging understanding through graded simplification of legal data - Artificial Intelligence and Law

Abstract

Legal documents are notorious for their length, density, and jargon-heavy language, making them challenging to navigate and comprehend. This highlights a strong need for clear and accessible documentation for a diverse audience. Text simplification at multiple levels, tailored to individuals with diverse backgrounds and expertise, is essential in making legal content universally accessible. To this effect, in this work, we focus on paragraph-level simplification of legal contracts and introduce Graded Simplification for Legal Data, a framework that adapts contract clauses across three competency levels: Skilled, Intermediate, and Basic. We employ large language models (LLMs) to perform graded simplification, supported by a token-efficient compression mechanism that incrementally encodes document context across paragraphs within fixed tokens, making it well suited to lengthy contracts. To address the challenge of reliably evaluating legal simplification at scale, we design a multi-criteria evaluation framework that jointly assesses readability, lexical simplicity, semantic preservation, and entailment. This framework enables the creation of our key resource, the SimpLegal dataset, an English-language preference dataset of paragraph-level contract simplifications. Using this dataset for Direct Preference Optimization (DPO), we achieve notable gains (+5 points) in readability and simplicity over zero-shot prompting-based baselines. Collectively, these contributions underscore the importance of graded, paragraph-level simplification for contracts and demonstrate that small and medium-scale LLMs, when fine-tuned on preference data, can achieve performance comparable to larger models, providing a scalable pathway for accessible and comprehensible legal documentation. Our code and dataset are made available at https://github.com/GSLD-SimpLegal/FromJargonToClarity.git.
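The readability criterion in the paper's multi-criteria evaluation is typically measured with formulas such as Flesch Reading Ease (the paper's notes mention using the textstat library). The sketch below is a simplified stand-in, not the authors' code: it implements the standard Flesch formula with a crude vowel-group syllable counter, which is enough to rank a legalese clause against its plain-language rewrite.

```python
import re

def count_syllables(word: str) -> int:
    # heuristic: count each run of consecutive vowels as one syllable
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/word). Higher scores mean easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * len(words) / len(sentences)
            - 84.6 * syllables / len(words))
```

A dedicated library handles edge cases (silent e, abbreviations) far better; the point here is only that the metric rewards shorter sentences and shorter words, which is why graded simplification moves it.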
Data Availability: The link to code and data has been shared in the manuscript.

Notes:
- https://www.sec.gov/edgar/search-and-access
- https://github.com/GSLD-SimpLegal/FromJargonToClarity.git
- https://openai.com/chatgpt
- In our experiments, we observed that the models typically converge to their best response by the third or fourth iteration. We set i = 2 and j = 10, balancing computational constraints and convergence.
- Empirically, scores between 0.3 and 0.5 indicated semantic drift; thus, 0.6 was adopted as a stricter cutoff.
- https://scikit-learn.org/stable/modules/generated/sklearn.metrics.cohen_kappa_score.html (Cohen's Kappa implementation in scikit-learn)
- https://github.com/LexPredict/lexpredict-lexnlp
- 96 candidates × 6K paragraphs = 576K total candidate paragraphs
- Top 15 responses selected (6000 × 15); not all pairs were retained, final set ~65,000
- https://pypi.org/project/textstat/ (textstat library for readability metrics)
- https://pingouin-stats.org/generated/pingouin.intraclass_corr.html (Intraclass Correlation Coefficient implementation in Pingouin)
- https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct
- We reused publicly available implementations where possible and otherwise implemented faithful reproductions based on the methodologies described in the original papers.
https://doi.org/10.18653/v1/2021.inlg-1.38Shukla A, Bhattacharya P, Poddar S, Mukherjee R, Ghosh K, Goyal P, Ghosh S (2022) Legal case document summarization: Extractive and abstractive methods and their evaluation. In: Proceedings of the 2nd conference of the Asia-Pacific chapter of the association for computational linguistics and the 12th international joint conference on natural language processing (volume 1: Long Papers), pp 1048–1064Sulem E, Abend O, Rappoport A (2018) Simple and effective text simplification using semantic and neural methods. In: Proceedings of the 56th annual meeting of the association for computational linguistics (volume 1: Long Papers), pp 162–173Sun H, Gao R, Zhang P, Yang B, Wang R (2025) Enhancing machine translation with self-supervised preference data. In: Che W, Nabende J, Shutova E, Pilehvar MT (eds) Proceedings of the 63rd annual meeting of the association for computational linguistics (volume 1: Long Papers), pp 23916–23934. Association for Computational Linguistics, Vienna, Austria. https://doi.org/10.18653/v1/2025.acl-long.1165Sun R, Jin H, Wan X (2021) Document-level text simplification: Dataset, criteria and baseline. In: Proceedings of the 2021 conference on empirical methods in natural language processing, pp 7997–8013Team G (2024) Gemma. https://doi.org/10.34740/KAGGLE/M/3301Team Q (2025) Qwen3 Technical Report. arxiv:2505.09388Thilagavathy R, Chaudhari S, Rastogi JS (2024) Simplification and summarization of legal contracts. In: AIP Conference proceedings, vol 3075, p 020113. AIP Publishing LLCTouvron H, Lavril T, Izacard G, Martinet X, Lachaux M-A, Lacroix T, Rozière B, Goyal N, Hambro E, Azhar F et al (2023) Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971Vásquez-Rodríguez L, Shardlow M, Przybyła P, Ananiadou S (2023) Document-level text simplification with coherence evaluation. 
In: Proceedings of the second workshop on text simplification, accessibility and readability, pp 85–101Wubben S, Bosch A, Krahmer E (2012) Sentence simplification by monolingual machine translation. In: Li H, Lin C-Y, Osborne M, Lee GG, Park JC (eds) Proceedings of the 50th annual meeting of the association for computational linguistics (volume 1: Long Papers), pp 1015–1024. Association for Computational Linguistics, Jeju Island, Korea. https://aclanthology.org/P12-1107/Xu W, Callison-Burch C, Napoles C (2015) Problems in current text simplification research: New data can help. Trans Assoc Comput Linguist 3:283–297Article Google Scholar Xu W, Napoles C, Pavlick E, Chen Q (2016) Callison-Burch C Optimizing statistical machine translation for text simplification. Trans Assoc Comput Linguist 4:401–415Article Google Scholar Xu W, Napoles C, Pavlick E, Chen Q (2016) Callison-Burch C Optimizing statistical machine translation for text simplification, vol 4, pp 401–415. https://www.aclweb.org/anthology/Q16-1029Yang G, Chen J, Lin W, Byrne B (2024) Direct preference optimization for neural machine translation with minimum bayes risk decoding. In: Proceedings of the 2024 conference of the North American chapter of the association for computational linguistics: Human language technologies (volume 2: Short Papers), pp 391–398Zhang T, Kishore V, Wu F, Weinberger KQ, Artzi Y (2019) Bertscore: Evaluating text generation with bert. In: International conference on learning representationsZhang X, Lapata M (2017) Sentence simplification with deep reinforcement learning. In: Palmer M, Hwa R, Riedel S (eds) Proceedings of the 2017 conference on empirical methods in natural language processing, pp 584–594. Association for Computational Linguistics, Copenhagen, Denmark. https://doi.org/10.18653/v1/D17-1062Zhou W, Jiang YE, Wilcox E, Cotterell R, Sachan M (2023) Controlled text generation with natural language instructions. 
Author information

Authors and Affiliations
LTRC, International Institute of Information Technology, Hyderabad, Hyderabad, India
Hiranmai Sri Adibhatla, Ananya Mukherjee & Manish Shrivastava

Contributions
H.S. played a major role in data collection, conducting experiments, and preparing the manuscript. A.M. helped with the evaluation framework and manuscript editing. M.S. provided guidance and reviewed the manuscript.

Corresponding author
Correspondence to Hiranmai Sri Adibhatla.

Ethics declarations
Competing interests: The authors declare no competing interests.

Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

A Appendix

A.1 Examples of simplification across grades
This section contains Table 8, a collection of whole clauses that illustrates how simplification is performed across the three defined grades. Looking at the second example in detail: for the original sentence "Executive agrees that Executive shall not directly or indirectly solicit an employee of the Company to terminate their employment relationship with the Company", the core semantic intent is to prohibit attempts to induce an employee to leave the company. The Basic version simplifies this to "You promise not to try to get an employee to quit their jobs. You won't do it on your own or help someone else do it". The intent is preserved through explicit phrasing such as "try to get an employee to quit" and "help someone else do it," which together capture both the direct and indirect solicitation aspects of the original clause.

Table 8: Clauses and their simplified variants

A.2 Implementation details
This section outlines the implementation details of our approach. The models employed, along with their configurations, are summarized in Table 9, while the evaluation metrics and their corresponding signatures are presented in Table 10, ensuring transparency in model selection and reproducibility of results.

Table 9: Model details and hyperparameters
Table 10: Signatures and source code details of automatic evaluation metrics

A.3 Simplification settings
Table 11 details the different context lengths (no context, i.e. paragraph-only, and paragraph with token-restricted context), variant types, simplification grades, and models, where each unique combination results in a generated simplification.

Table 11: Combinatorial overview of simplification settings, showing context length, variant type, grade, and model for each generated simplification

A.4 Prompts
The prompts designed to guide the models effectively are detailed here. Prompts help define the task clearly, ensuring that the model focuses on the intended aspects of the text. This section includes prompts corresponding to the three levels of simplification (Figs. 5 and 6), allowing for graded outputs that vary in detail and complexity. It also provides a dedicated prompt (Fig. 7) for multi-criteria evaluation, enabling the assessment of outputs along dimensions such as readability, simplicity, and fidelity to the original text.

Fig. 5: Prompt template used for simplification of skilled and basic grades
Fig. 6: Prompt template used for simplification of intermediate grade
Fig. 7: Prompt for self-evaluation of simplified legal text across different grade levels

Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Cite this article
Sri Adibhatla, H., Mukherjee, A. & Shrivastava, M. From jargon to clarity: bridging understanding through graded simplification of legal data. Artif Intell Law (2026). https://doi.org/10.1007/s10506-026-09503-y
Received: 02 October 2025. Accepted: 26 January 2026. Published: 03 March 2026. Version of record: 03 March 2026.

Keywords: Text simplification, LLM, Token-efficient, Legal, Preference optimization
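The experiment grid described in A.3 (every combination of context length, variant type, grade, and model yields one generated simplification) can be sketched as a Cartesian product. This is a minimal illustration, not the authors' code: the concrete context lengths, variant types, and model names live in Table 11, so the placeholder values below are assumptions, except for the three grades (basic, intermediate, skilled), which the paper names explicitly.

```python
from itertools import product

# Placeholder values; only `grades` is taken from the paper (A.4).
# The real lists of context lengths, variant types, and models are in Table 11.
context_lengths = ["paragraph_only", "paragraph_plus_token_restricted_context"]
variant_types = ["variant_a", "variant_b"]  # hypothetical names
grades = ["basic", "intermediate", "skilled"]
models = ["model_1", "model_2", "model_3", "model_4"]  # hypothetical names

# Each unique combination corresponds to one generated simplification.
settings = [
    {"context": c, "variant": v, "grade": g, "model": m}
    for c, v, g, m in product(context_lengths, variant_types, grades, models)
]

print(len(settings))  # 2 * 2 * 3 * 4 = 48 combinations under these placeholders
```

Enumerating the grid up front makes the evaluation reproducible: each generated output can be keyed by its `(context, variant, grade, model)` tuple when scoring.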

Source: Springer. Published: 2026-03-03
Manufacturing Work Beyond Manufacturing Industries: A Reassessment of Structural Change

This paper studies the labor market impact of structural change by distinguishing between industry- and occupation-based measures of manufacturing and service employment. Using German data from 1975–2019, we find that 67% of manufacturing jobs lost in manufacturing industries are offset by new manufacturing jobs in service industries. Linking these aggregate patterns to worker-level outcomes, we show that the severity of displacement costs depends on the occupation–sector characteristics of the next job. Workers who retain manufacturing occupations in the service sector experience employment trajectories comparable to those remaining in manufacturing, indicating that structural change is less disruptive than commonly perceived. (joint with Dominik Boddin)

Source: DIW Berlin. Published: