<?xml version="1.0" encoding="utf-8"?>

<feed xmlns="http://www.w3.org/2005/Atom">
<title>FID Recht - Recht und Technik</title>
<generator uri="http://tt-rss.org/">Tiny Tiny RSS/UNKNOWN (Unsupported, Git error)</generator>
<updated>2025-12-20T22:38:55+00:00</updated>
<id>https://vifa-recht.de/feed/19</id>
<link href="https://vifa-recht.de/feed/19" rel="self"/>

<link href="https://vifa-recht.de" rel="alternate"/>

<entry>
	<id>tag:vifa-recht.de,2026-04-09:/284971</id>
	<link href="https://www.gautrais.com/conferences/ccq-numerique-livre-10-du-droit-international-prive/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=ccq-numerique-livre-10-du-droit-international-prive" rel="alternate" type="text/html"/>
	<title type="html">CCQ + Numérique: Livre 10 &amp;#8211; Du droit international privé, A-3421 + Zoom(9 avril 2026)</title>
	<summary type="html"><![CDATA[<p>Cette conf&eacute;rence explore les mutations du&nbsp;Livre 10 (droit international priv&eacute;) du&nbsp;Code civil du Qu&eacute;b...</p>]]></summary>
	<content type="html"><![CDATA[<div dir="auto">Cette conf&eacute;rence explore les mutations du&nbsp;<b>Livre 10 (droit international priv&eacute;) du&nbsp;<em>Code civil du Qu&eacute;bec</em></b>&nbsp;face aux d&eacute;fis technologiques. Nos experts acad&eacute;mique et professionnels analyseront les &eacute;volutions et questionnements &agrave; l&rsquo;oeuvre.</div>
<div dir="auto"></div>
<div dir="auto">Venez &eacute;couter le professeur Harith Al-Dabbagh (Facult&eacute; de droit, Universit&eacute; de Montr&eacute;al) accompagn&eacute; de Me, Vicken Patanian (Patanian Law Firm) et du professeur Guillaume Lagani&egrave;re (D&eacute;partement de sciences juridiques de l&rsquo;UQAM) qui partageront leurs r&eacute;flexions sur ce sujet&nbsp;!</div>
<div dir="auto"></div>
<div dir="auto">&#128205; En personne &agrave; l&rsquo;Universit&eacute; de Montr&eacute;al (A-3421);</div>
<div dir="auto">&#128250; Diffusion en direct sur Zoom;</div>
<div dir="auto">&#128351; 17h00 | 1 heure 30 de formation continue reconnue</div>
<div dir="auto"></div>
<div dir="auto">&#128073; Inscription gratuite&nbsp;:&nbsp;<a href="https://fcdroit.umontreal.ca/Web/MyCatalog/ViewP?pid=OPWhgFdTt9fynJhm%2fIXQ4A%3d%3d&amp;id=5SPrK8RrPxi23WLR57jz%2bg%3d%3d&amp;cvState=cvDate=09-04-2026" rel="noopener noreferrer" target="_blank">ici&nbsp;!</a></div>]]></content>
	<updated>2026-04-09T17:14:24+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-04-09T17:14:24+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-04-08:/284889</id>
	<link href="https://law.stanford.edu/2026/04/08/when-claude-code-meets-apples-app-store/" rel="alternate" type="text/html"/>
	<title type="html">When Claude Code Meets Apple’s App Store </title>
	<summary type="html"><![CDATA[<p>Apple&rsquo;s App Store submission is one of the more demanding gatekeeping mechanisms in consumer s...</p>]]></summary>
	<content type="html"><![CDATA[<p>Apple&rsquo;s App Store submission is one of the more demanding gatekeeping mechanisms in consumer software. It requires accurate privacy disclosures, published security standards, measurable performance and accessibility thresholds, and design compliance reviewed by human reviewers. With Artificial General Intelligence (AGI) claims refusing to die, I decided to take a look at whether Claude Code would fit the bill and what would happen if I brought an app from Claude Code and introduced it to the App Store.</p>
<p>Claude Code moves fast at ideation, screen mapping, scaffolding, and boilerplate generation, and developers have shipped real apps this way. But at the back of the development life cycle, where compliance, privacy disclosure, security architecture, and App Store submission live, the human cost reasserts itself. Studies of AI-generated code indicate a significant share requires refactoring to meet Apple&rsquo;s accessibility and performance standards, and a meaningful fraction of AI-driven apps fail review due to privacy or design violations.</p>
<p>A little more than three years ago, I began developing the <a href="https://ailccp.replit.app" rel="noopener noreferrer" target="_blank">AI Life Cycle Core Principles (AILCCP)</a>. This is a framework&mdash;and now an app&mdash;that organizes AI development and deployment obligations across 37 principles, 10 development phases, and 48 controls, mapped to international standards and regulatory enforcement contexts. It gives developers, deployers, lawyers, and policymakers a shared vocabulary and methodology for assessing where an AI system meets its obligations and where it falls short across the development and deployment life cycle. I use it to granularly analyze things like AI legislation, policies, AI vendor agreements, AI governance documents, and questions such as whether Claude Code is AGI. Each of the 37 principles carries multiple requirements. Three principles apply most directly here: Wherewithal, Human-Centered, and Workforce Compatible. For each, I focus on the requirements most relevant to what the App Store test exposes, then apply them to Claude Code.</p>
<p><b>Wherewithal</b> asks whether the capability matches what is being claimed. The enthusiasm around AI coding tools has generated claims that Claude Code can take a developer from idea to shipped app with minimal effort. That framing describes the front of the life cycle accurately and the back poorly, and developers who plan around it will discover the gap at exactly the point where it costs the most to close.</p>
<p><b>Human-Centered</b> requires human-in-the-loop oversight at the pre-deployment review and deployment phases. Those are the phases where an iOS app is tested against Apple&rsquo;s privacy guidelines, where data handling disclosures are drafted and verified, where security architecture is stress-tested, and where the submission package is assembled and submitted for review. Claude Code does not do those things independently. A developer who has moved quickly through scaffolding and code generation arrives at those phases with the tool&rsquo;s momentum behind them and its limitations fully exposed.</p>
<p><b>Workforce Compatible</b> asks whether an AI tool builds human capability or displaces it. A developer who uses Claude Code to generate iOS code throughout a project never learns iOS development. They learn to prompt. When the tool produces architecturally flawed code, which it does with some regularity, the developer has no independent basis for catching the error. They are dependent on the tool to identify problems that the tool created. That is not augmentation. It is a different kind of dependency, and it grows less visible the more the tool appears to be working.</p>
<p>Claude Code is powerful, without a doubt. But phase-specific competence at a high level is not what the &ldquo;G&rdquo; in AGI means.</p>
	<updated>2026-04-08T14:32:14+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-04-08T14:32:14+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="agi"/>

	<category term="anthropic"/>

	<category term="artificial general intelligence"/>

	<category term="eran kahana"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-04-07:/284816</id>
	<link href="https://law.stanford.edu/2026/04/07/when-the-medium-becomes-the-message-and-the-message-becomes-irrelevant/" rel="alternate" type="text/html"/>
	<title type="html">When the Medium Becomes the Message and the Message Becomes Irrelevant</title>
	<summary type="html"><![CDATA[<p>A widely circulated image purportedly depicting one of the American airmen recently rescued by U.S. ...</p>]]></summary>
	<content type="html"><![CDATA[<p>A widely circulated image purportedly depicting one of the American airmen recently rescued by U.S. special forces from Iran drew millions of views this past week. It drew something else as well: a fact-check. The image, several accounts announced with evident satisfaction, is AI-generated. (Bravo, Inspector Clouseau.) Texas Governor Greg Abbott shared it. Major influencers amplified it. The fact-check lit up the reply threads.</p>
<p>I want to set aside the image and focus on the finger-pointer, because what the act of identification reveals is more interesting than the image itself.</p>
<p>Consider what was not contested. The airman&rsquo;s rescue happened. The emotion expressed by millions of people who engaged with the image was genuine. No one who shared it claimed it was a photograph taken by a photojournalist embedded with the rescue team. Most people who encountered it likely experienced it the way they experience a commemorative illustration, as a visual token for something that actually occurred.</p>
<p>Now consider a cartoon. Had someone drawn the same scene, soldiers in a helicopter, smiling, American flag in hand, in the style of a tasteful editorial illustration, the fact-checkers would have had nothing to say. The drawing would have traveled the same emotional circuit. The soldiers would have been the same soldiers. The rescue would have been the same rescue. The difference between the cartoon and the AI-generated image is purely procedural. The AI image was generated by a statistical model trained on visual data. The cartoon was generated by a human hand trained on visual instruction. In both cases, no camera was present. In both cases, the image is a representation, not a document.</p>
<p>Yuval Noah Harari argued in <em>Sapiens</em> that the human capacity for shared fiction, for constructing and inhabiting stories that are not literally true in a documentary sense, is the source of civilizational cohesion. The story of a rescued soldier, expressed in an image that was never a photograph, is doing exactly this work. It is binding a community around a shared recognition of something that happened and matters. The finger-pointer, by flagging the image&rsquo;s generative provenance, is not adding epistemic content. The finger-pointer is asserting a procedural standard as a substitute for engaging with the story. The question &ldquo;is this AI-generated?&rdquo; has displaced the question &ldquo;is this meaningful?&rdquo; and the displacement is being performed as though it were a contribution to public discourse.</p>
<p>What the finger-pointer is actually doing is performing epistemic status. The detection requires no expertise, but deploying it produces the appearance of rigor: I saw through this, I identified the error, I am the one who knows. This is not fact-checking in any meaningful sense. Fact-checking interrogates claims, and the claim here, that American airmen were rescued, is true. What is being fact-checked is the artwork.</p>
<p>This particular form of intervention will become self-obsolete. The precedent is already visible. When Photoshop entered the visual commons in the 1990s, &ldquo;it&rsquo;s been Photoshopped&rdquo; carried the same accusatory charge the AI flag carries today. The charge faded because it became universal. Sharpening, cropping, color grading, exposure correction, skin retouching, background removal: these are now understood as the ordinary conditions of professional image-making, not deviations from it. Nobody pauses before a magazine cover to announce that the photograph has been post-processed.</p>
<p>As generative models improve and AI-generated imagery saturates the visual commons, the identification will carry decreasing signal. When every image could be AI-generated and many will be, announcing that a specific image is AI-generated will produce the same information as announcing that a specific sentence was typed on a keyboard.</p>]]></content>
	<updated>2026-04-07T14:17:39+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-04-07T14:17:39+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="artificial intelligence"/>

	<category term="eran kahana"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-04-06:/284719</id>
	<link href="https://law.stanford.edu/2026/04/05/turning-ai-governance-into-operational-infrastructure/" rel="alternate" type="text/html"/>
	<title type="html">Turning AI Governance Into Operational Infrastructure</title>
	<summary type="html"><![CDATA[<p>I started building the AI Life Cycle Core Principles (AILCCP) framework in March 2023 because I foun...</p>]]></summary>
	<content type="html"><![CDATA[<p>I started building the AI Life Cycle Core Principles (AILCCP) framework in March 2023 because I found that terms like &ldquo;trustworthy,&rdquo; &ldquo;reliable,&rdquo; &ldquo;secure,&rdquo; &ldquo;safe,&rdquo; &ldquo;explainable,&rdquo; &ldquo;robust,&rdquo; and &ldquo;ethical&rdquo; were being used in AI governance with persistent, frustrating ambiguity. That ambiguity might look like flexibility, but it is not. It creates a definitional vacuum that destabilizes the ability of stakeholders to maintain a coherent conversation about what these principles mean and<span>&nbsp; </span>actually require. And when the principles themselves are imprecise, the laws, regulations, standards, and best practices that refer to them inherit that imprecision, and become less effective or entirely ineffective. My work making them more concrete exposed how many adjacent areas needed the same treatment, things like ownership, life cycle coverage, risk interdependencies, standards mapping. The result is the framework as it stands today, and the work is ongoing.<span>&nbsp;</span></p>
<p>With the release of the AILCCP Explorer, an <a href="https://ailccp.replit.app" rel="noopener noreferrer" target="_blank">interactive web application</a> that makes the full framework navigable and searchable, this felt like the right moment to revisit what the AILCCP is, how it works, and why it is built the way it is.</p>
<p>The AILCCP is a structured knowledge graph that connects existing principles, controls, international standards, life cycle phases, and identified risks into a single navigable structure with over 500 explicit cross-references. The ambiguity is extinguished.</p>
<h4><b>The AI Governance Problem</b></h4>
<p>ISO/IEC 42001 addresses AI management systems. The NIST AI Risk Management Framework maps risk categories and profiles. IEEE has published standards addressing algorithmic bias, transparency, and system design. The EU AI Act imposes risk-based obligations with enforcement teeth.</p>
<p>But these instruments do not talk to each other. Anyone building, deploying, procuring, or auditing an AI system today must reconcile guidance from all of them, map their practices to regulatory expectations that often vary by jurisdiction and culture, and produce documentation that satisfies reviewers.</p>
<h4><b>What the AILCCP Is</b></h4>
<p>The AILCCP is a cross-linked knowledge base built from five components: principles, controls, standards, life cycle phases, and risks. Each one connects to the others through explicit, traceable links.</p>
<p>The framework is built on 37 principles, most of which were distilled from international consensus documents such as those of the OECD, UNESCO, the G7, the G20, and APAC. The AILCCP gives each one a defined scope, an objective, and measurable outcomes so that stakeholders working with different source standards are looking at the same thing. Governance follows an AI system from the first scoping decision through operational monitoring to eventual retirement.</p>
<h4><b>The Architecture</b></h4>
<p><b>37 Principles</b></p>
<p>Each principle includes a short definition, a detailed definition, an objective statement, key questions, suggested controls, required evidence artifacts, and identified stakeholders. The principles are organized across 15 categories and mapped to 10 pillars that span Oversight and Accountability, Reliability and Robustness, Transparency and Explainability, Ethics, Fairness and Equity, Privacy and Consent, Safety and Security, Human-Centered and Workforce concerns, Data and Process stewardship, and Organizational Capability.</p>
<p>Every principle includes a rationale explaining why it belongs in the framework. When stakeholders adapt the framework to their context, the rationale helps them decide which principles matter most for their system.</p>
<p><b>48 Controls</b></p>
<p>Controls are the &ldquo;how.&rdquo; Each is defined by name, domain, function, and rationale, and each maps to its top three principle alignments. Across the full set, this produces 187 control-to-principle links. Every one of the 48 connects to at least one principle, ensuring that implementation guidance always traces back to a governance commitment.</p>
<p>But here is the thing: controls that exist in isolation, disconnected from the principles they are meant to serve, tend to break down. When a control has no explicit link to a principle, stakeholders struggle to explain why they are implementing it, auditors have difficulty assessing whether it is sufficient, and the control becomes a compliance artifact rather than a governance mechanism.</p>
<p><b>43 International Standards</b></p>
<p>The framework maps 43 standards from IEEE, ISO/IEC, and NIST, each with a scope statement, summary, intended use, and identified primary users. Each standard maps to up to five principles, generating 215 standard-to-principle links that touch 29 of the 37 principles.</p>
<p>The 43 standards were selected because they are actionable and recognized across regulatory and audit contexts. Standards are increasingly taking on weight, legitimacy, and force, recognized by legislators, regulators, courts, and the broader developer and implementer ecosystem. When the question is &ldquo;show me the controls for data governance,&rdquo; the answer has to trace to standards that carry that weight.</p>
<p><b>10 Life Cycle Phases</b></p>
<p>The life cycle spans ten phases, from Scoping and Design through Decommissioning and Archiving. Each phase identifies default owners (Product, Legal, ML Engineering, SRE, and others), expected evidence artifacts, and measurable metrics. Across all ten phases, 84 phase-to-principle links map governance commitments to specific moments in the system&rsquo;s life.</p>
<p>Each link comes with a life cycle signal that includes a rationale explaining why that principle matters at that stage. Transparency, for example, means something different during Operations and Monitoring than it does during Scoping and Design.</p>
<p>Scoping and Design tracks requirements coverage percentage and reading level targets. Data Preparation tracks missing and invalid data rates, label agreement scores, and PII leakage tests. Evaluation and Red Teaming tracks bias delta, attack success rates, and coverage percentage. Operations and Monitoring tracks mean time to repair, drift alerts per month, and SLO attainment. Instead of &ldquo;monitor for bias,&rdquo; the framework says &ldquo;measure bias delta during Evaluation and Red Teaming and track drift alerts per month during Operations and Monitoring.&rdquo;</p>
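<p>As a minimal sketch of how a phase can carry owners and measurable metrics, assuming a hypothetical Python encoding (the class names, owners, and target values below are illustrative assumptions, not the framework&rsquo;s published schema):</p>
<pre><code>from dataclasses import dataclass, field

@dataclass
class PhaseMetric:
    """One measurable signal tied to a life cycle phase (illustrative)."""
    name: str
    unit: str
    target: float  # hypothetical target; the AILCCP defines its own values

@dataclass
class LifeCyclePhase:
    name: str
    owners: list[str]  # default owners for the phase
    metrics: list[PhaseMetric] = field(default_factory=list)

# Hypothetical encodings of two of the phases described above.
evaluation = LifeCyclePhase(
    name="Evaluation and Red Teaming",
    owners=["ML Engineering", "Risk"],
    metrics=[PhaseMetric("bias_delta", "absolute difference", 0.02),
             PhaseMetric("attack_success_rate", "percent", 1.0)],
)
operations = LifeCyclePhase(
    name="Operations and Monitoring",
    owners=["SRE", "Data Science"],
    metrics=[PhaseMetric("mean_time_to_repair", "hours", 4.0),
             PhaseMetric("drift_alerts_per_month", "count", 3.0)],
)

for phase in (evaluation, operations):
    print(phase.name, "tracks", [m.name for m in phase.metrics])
</code></pre>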
<p><b>18 Identified Risks</b></p>
<p>The risk layer assesses 18 identified risks for severity and likelihood using a qualitative rubric tied to the five pillars. Seven are rated Very High severity, eight High, and three Medium. These risks generate 23 links to standards and touch 24 of the 37 principles, connecting the threat landscape directly to the controls and standards that address it.</p>
<p>As I see it, one of the more distinctive ideas in the framework is the &ldquo;enabling risk&rdquo; concept. The three risks rated Medium severity are transparency and explainability gaps that function as force multipliers for other, more serious harms. A system that lacks Explainability makes every other harm harder to detect, harder to diagnose, and harder to remediate. This layered thinking about risk cascades reflects how AI breakdowns actually propagate in practice.</p>
<p><b>The Cross-Link Network</b></p>
<p>In total, the framework contains over 500 explicit links. 187 control-to-principle. 215 standard-to-principle. 84 phase-to-principle. 23 risk-to-standard. Pick any entry point and trace a path to every other part of the framework.</p>
<h4><b>What Sets the AILCCP Apart</b></h4>
<p><b>Bidirectional Traceability</b></p>
<p>Most governance frameworks are organized top-down. The NIST AI RMF flows from four functions (GOVERN, MAP, MEASURE, MANAGE) down to categories and subcategories, but provides no built-in path from a risk finding back to the relevant activities and standards. ISO/IEC 42001 follows the Annex SL hierarchy common to ISO management standards, with 42 control objectives that trace from clauses downward, but the reverse mapping is left to the implementing organization. The OECD AI Principles offer five principles and five policy recommendations with no controls, no life cycle phases, and no risk mappings at all. In each case, the framework is organized in one direction.</p>
<p>A diligent team can reverse-engineer any of these frameworks. But the AILCCP builds the reverse paths in. Its 500+ explicit cross-references mean a user can start from a risk and trace to the standards and principles that mitigate it, start from a standard and see which principles it supports and which life cycle phases it touches, or start from a life cycle phase and see what should be measured, who owns it, and what evidence needs to be produced.</p>
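<p>One way to picture this is as a typed link store indexed in both directions. The sketch below is an illustrative reconstruction, not the Explorer&rsquo;s actual data model; the entity identifiers and link pairs are invented for the example:</p>
<pre><code>from collections import defaultdict

# Directed links as (source, target) pairs, one list per link type.
# These pairs are invented placeholders, not actual AILCCP entries.
links = {
    "control-to-principle": [("C-07", "P-Transparency"), ("C-07", "P-Accountability")],
    "standard-to-principle": [("ISO/IEC 42001", "P-Accountability")],
    "phase-to-principle": [("Operations and Monitoring", "P-Transparency")],
    "risk-to-standard": [("R-ExplainabilityGap", "ISO/IEC 42001")],
}

# Index every pair both ways so any entity is a valid entry point.
forward, reverse = defaultdict(list), defaultdict(list)
for link_type, pairs in links.items():
    for src, dst in pairs:
        forward[src].append((link_type, dst))
        reverse[dst].append((link_type, src))

# Forward path: which principles does control C-07 implement?
print(forward["C-07"])

# Reverse index: which controls and standards point at a principle?
print(reverse["P-Accountability"])

# Multi-hop reverse path: start from a risk, hop to standards, then to
# principles, the "auditor starts with a finding" traversal described below.
for _, std in forward["R-ExplainabilityGap"]:
    print(std, "supports", [p for _, p in forward[std]])
</code></pre>
<p>Because each pair is indexed both ways, the same store answers the forward and the reverse question without any re-engineering.</p>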
<p>An auditor starts with a finding, a development team with a life cycle phase, a regulator with a risk. The graph accommodates all of them.</p>
<p><b>Ownership Built In</b></p>
<p>Every life cycle phase names default owners, required evidence artifacts, and measurable metrics. This turns governance from &ldquo;someone should handle this&rdquo; into &ldquo;here is who is responsible, here is what they produce, and here is how it gets measured.&rdquo;</p>
<p>The ownership model spans Product, UX, Legal, Risk, ML Engineering, Data Science, QA, Security, SRE, and Communications, because AI governance requires coordinated action across disciplines.</p>
<p><b>Designed for Audits</b></p>
<p>Because the AILCCP maps finalized, prescriptive standards, it produces references auditors and regulators recognize. When someone asks for evidence of data governance controls, the framework traces to a specific control, its rationale, the principles it implements, and the published standards that back it up. That is what &ldquo;audit-ready&rdquo; looks like in practice: a traceable chain from commitment to evidence.</p>
<p><b>Coverage Visibility</b></p>
<p>With 29 of 37 principles referenced by standards and 24 of 37 referenced by identified risks, the framework makes its own coverage gaps visible. Eight principles are not yet referenced by any mapped standard. Stakeholders can see at a glance which principles have strong standards backing and which need additional work.</p>
<p><b>Who the Framework Serves</b></p>
<p><b>Development teams</b> can use the 48 controls as a checklist during system design and code review, trace a specific risk back to the principles and controls that mitigate it, and identify which standards apply to a given feature or component.</p>
<p><b>Compliance and legal teams</b> can demonstrate alignment with the EU AI Act, ISO/IEC 42001, and other regulatory frameworks, prepare audit-ready documentation by mapping internal practices to published standards, and build a defensible governance narrative for regulators.</p>
<p><b>Risk and audit professionals</b> can use the severity and likelihood rubric to prioritize assessments, trace risks to specific life cycle phases to focus audit scope, and cross-reference internal risk registers against the AILCCP&rsquo;s identified risks.</p>
<p><b>Regulators and policy advisors</b> can use the framework to understand how international standards map to practical governance actions, and evaluate organizational compliance claims against a structured benchmark.</p>
<p><b>Executives and board members</b> can get a strategic view of governance coverage across the five pillars without requiring technical depth, using the framework as a common language between technical teams and leadership.</p>
<p>A small team can use the controls as a lightweight development checklist. A large enterprise can use the full cross-linked structure to build audit documentation, assign ownership across departments, and track metrics at every life cycle phase.</p>
<h4><b>The AILCCP Explorer</b></h4>
<p>The framework is delivered as an interactive, searchable web application called the AILCCP Explorer. The Explorer provides multi-directional navigation. Start from any entity type and trace connections across the knowledge graph. Filter by pillar, phase, risk severity, or standard body. And the Export Library feature enables offline analysis and audit preparation.</p>
<p>The risk assessment methodology is built into the interface with inline explanations, so stakeholders can understand why a risk carries the severity rating it does without consulting a separate document.</p>
<h4><b>Governance as Infrastructure</b></h4>
<p>The AILCCP started with a simple observation: the vocabulary of AI governance was too ambiguous to be operative. Three years later, that initial effort to define terms with precision has grown into a knowledge graph of 37 principles, 48 controls, 43 international standards, 10 life cycle phases, and 18 identified risks, all connected through over 500 explicit cross-references. The framework assigns ownership, specifies measurable metrics at each phase, and traces every control back to the principles and standards it serves. It works in every direction, so that an auditor entering through a finding, a development team starting at a life cycle phase, a compliance officer mapping to regulatory expectations, a board member looking for coverage across pillars, and a regulator focused on a risk are all navigating the same structure.</p>
<p>The work continues, and the AILCCP Explorer makes AI governance accessible.</p>
<p>Explore the <a href="https://ailccp.replit.app" rel="noopener noreferrer" target="_blank">tool</a>. Try it. And tell me what works and what doesn&rsquo;t.</p>]]></content>
	<updated>2026-04-06T02:13:07+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-04-06T02:13:07+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="ai governance"/>

	<category term="ailccp"/>

	<category term="eran kahana"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-30:/284165</id>
	<link href="https://www.gautrais.com/blogue/2026/03/29/preuve-ccq-numerique/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=preuve-ccq-numerique" rel="alternate" type="text/html"/>
	<title type="html">Preuve + CCQ + Numérique</title>
	<summary type="html"><![CDATA[<p>Il est rare que je le fasse, en pr&eacute;parant mes notes pour la conf&eacute;rence qui aura lieu cet apr&egrave;s-midi ...</p>]]></summary>
	<content type="html"><![CDATA[<p><a href="https://www.gautrais.com/files/sites/185/2026/01/CCQPreuve.png" rel="noopener noreferrer" target="_blank"><img fetchpriority="high" decoding="async" src="https://www.gautrais.com/files/sites/185/2026/01/CCQPreuve-475x671.png" alt="" srcset="https://www.gautrais.com/files/sites/185/2026/01/CCQPreuve-475x671.png 475w,https://www.gautrais.com/files/sites/185/2026/01/CCQPreuve-768x1084.png 768w,https://www.gautrais.com/files/sites/185/2026/01/CCQPreuve-725x1023.png 725w,https://www.gautrais.com/files/sites/185/2026/01/CCQPreuve.png 860w,https://www.gautrais.com/files/sites/185/2026/01/CCQPreuve-475x671.png 475w,https://www.gautrais.com/files/sites/185/2026/01/CCQPreuve-768x1084.png 768w,https://www.gautrais.com/files/sites/185/2026/01/CCQPreuve-725x1023.png 725w,https://www.gautrais.com/files/sites/185/2026/01/CCQPreuve.png 860w" sizes="(max-width: 269px) 100vw, 269px" referrerpolicy="no-referrer" loading="lazy"></a>Il est rare que je le fasse, en pr&eacute;parant mes notes pour la conf&eacute;rence qui aura lieu cet apr&egrave;s-midi &agrave; la Facult&eacute; de droit sur <a href="https://www.gautrais.com/?post_type=talk&amp;p=6091" rel="noopener noreferrer" target="_blank">CCQ + Num&eacute;rique: Livre 7 &ndash; De la preuve</a>, je me suis dit qu&rsquo;il serait sans doute pertinent de rendre publiques quelques notes relativement &agrave; cette activit&eacute;. Un propos qui forc&eacute;ment devra s&rsquo;ins&eacute;rer dans ceux de mes brillants coll&egrave;gues qui m&rsquo;ont fait le plaisir de m&rsquo;accompagner dans cette t&acirc;che de mieux percevoir ces dispositions modifi&eacute;es il y a longtemps, 2001, par la fameuse <a href="https://canlii.ca/t/6fptp" rel="noopener noreferrer" target="_blank">Loi concernant le cadre juridique des technologies de l&rsquo;information</a> (ci-apr&egrave;s LCCJTI), un texte abscon, certes, &agrave; la facture pour le moins d&eacute;rangeante pour le commun des juristes, mais qui n&rsquo;est pas sans attraits.</p>
<h4>1. Colleagues</h4>
<p>First, a few words about my brilliant colleagues, whom I introduce very briefly in alphabetical order:</p>
<ul>
<li>Me. <a href="https://www.blakes.com/fr-ca/equipe/tous-les-professionnels/claude-marseille-ad-e/" rel="noopener noreferrer" target="_blank"><strong>Claude Marseille</strong></a> est associ&eacute; chez <strong>Blakes</strong>, un praticien bien connu en litige, mais un praticien qui de surcro&icirc;t &eacute;crit et pense en profondeur le droit de la preuve civile.</li>
<li>Me. <a href="https://www.fasken.com/fr/soleica-monnier" rel="noopener noreferrer" target="_blank"><strong>Soleica Monnier</strong></a> est quant &agrave; elle avocate chez <strong>Fasken</strong>. J&rsquo;ai le privil&egrave;ge de la connaitre depuis longtemps, car elle fut &eacute;tudiante ici &agrave; L&rsquo;UdeM mais aussi car elle a travaill&eacute; pour le MJQ a une &eacute;poque o&ugrave; ce dernier avait une volont&eacute; de r&eacute;&eacute;valuer la LCCJTI pr&egrave;s de 20 ans apr&egrave;s son adoption. (Malheureusement, Soleica a du s&rsquo;absenter pour des raisons de sant&eacute;)</li>
<li>Professor <a href="https://www.uottawa.ca/faculte-droit/droit-civil/corps-professoral/panaccio-charles-maxime" rel="noopener noreferrer" target="_blank"><strong>Charles-Maxime Panaccio</strong></a> teaches at the <strong>University of Ottawa</strong>, Faculty of Law, Civil Law Section, and is notably co-author, with L&eacute;o Ducharme, of an essential work on the law of evidence whose latest (4th) edition dates from 2010 (<a href="https://app.caij.qc.ca/fr/doctrine/publications/wilson-et-lafleur-livres/5" rel="noopener noreferrer" target="_blank">Administration de la preuve</a>) (available online on the CAIJ website).</li>
</ul>
<h4>2. Differences of Views</h4>
<p>In fact, with respect to the LCCJTI, in 2020, following a study funded by the MJQ, <a href="https://www.gautrais.com/publications/etude-juridique-sur-la-loi-concernant-le-cadre-juridique-des-technologies-de-linformation-rlrq-c-c-1-1/" rel="noopener noreferrer" target="_blank">I committed to proposing changes to this text</a>, in the form of 36 recommendations. Those 36 recommendations were based on a survey in which about a hundred people saw fit to answer the forty or so questions then posed. Following that, the MJQ set up an <a href="https://www.gautrais.com/blogue/2020/12/18/etude-sur-la-reforme-de-la-lccjti-lancement-du-comite-de-travail-sur-lapplication-de-cette-loi/" rel="noopener noreferrer" target="_blank">ad hoc committee in 2021</a> in which avenues for change were considered. For the past 5 years, that avenue seems to have been shelved. Since then, an important article by <a href="https://ssl.editionsthemis.com/revue/article-5057-ab-ovo-des-lrorigine-la-loi-concernant-le-cadre-juridique-des-technologies-de-lrinformation-les-documents-technologiques-et-le-cadre-conceptuel-de-la-preuve-judiciaire.html" rel="noopener noreferrer" target="_blank">Professor Panaccio, published in 2023 in the RJT (57-1)</a>, has changed the picture: rather than &ldquo;patching,&rdquo; he would start over from the beginning (an <em>ab ovo</em> approach). A drastic position with which we are in head-on disagreement. This structural disagreement echoes many exchanges, notably with Me Marseille, who is likewise ill at ease with this statute. No matter! The statute tends to polarize evidence specialists and generalists. That is one of its flaws, among others.</p>
<p>Still, it is important to confront points of view, and to listen to one another, as this conference proposes. Long ago, around 2010 I believe, the CRDP organized an incredible conference at which Claude Fabien presented a perspective sharply at odds with that of Claude Marseille. More precisely, Professor Fabien considered that written testimony should be allowed more broadly (through article 2832 CCQ) in order to limit in-court testimony. For the former, the point was to make justice more &ldquo;efficient&rdquo; <a href="https://www.chairejlb.ca/publications/melanges-baudouin/" rel="noopener noreferrer" target="_blank">(you can find his excellent text (Le ou&iuml;-dire revisit&eacute;) in the M&eacute;langes Jean-Louis Baudouin)</a>; for the latter, on the contrary, testimony before the judge is the best means of bringing out the truth.</p>
<h4>3. Questions</h4>
<p>Thus, with the 3 speakers mentioned above, we agreed to address, within the 90 minutes available to us, the following 4 topics: 4 topics that will each be introduced by one speaker in 4 minutes and then debated with the other 3.</p>
<p><strong>3.1</strong> &ndash; Integrity + authenticity: <strong>Me Claude Marseille</strong> will open the debate on this point.</p>
<p><strong>3.2</strong> &ndash; Copy / Transfer: <strong>Me Soleica Monnier</strong> will address this dichotomy introduced in the LCCJTI.</p>
<p><strong>3.3</strong> &ndash; How should a document be characterized: as a writing, real evidence, or testimony? I will have the privilege of presenting, in particular, the decision in <a href="https://www.canlii.org/fr/qc/qcca/doc/2018/2018qcca608/2018qcca608.html" rel="noopener noreferrer" target="_blank">Benisty c. Kloda</a>.</p>
<p><strong>3.4</strong> &ndash; Faced with the LCCJTI, what should be done: overhaul or amend? <strong>Prof. Charles-Maxime Panaccio</strong> will launch the debate, no doubt drawing on his <a href="https://ssl.editionsthemis.com/revue/article-5057-ab-ovo-des-lrorigine-la-loi-concernant-le-cadre-juridique-des-technologies-de-lrinformation-les-documents-technologiques-et-le-cadre-conceptuel-de-la-preuve-judiciaire.html" rel="noopener noreferrer" target="_blank">article</a>.</p>
<h4>4. Answers</h4>
<p>In a blog format, I allow myself to offer fragments of answers to these four fascinating questions:</p>
<p><strong>4.1</strong> &ndash; The notion of integrity, which is omnipresent in the LCCJTI, does not call into question the notion of authenticity, which is central to evidence law. Simply, and no doubt clumsily, this text chose not to touch the link to the author, which is already provided for in the CCQ (under each type of evidence). The former therefore does not replace the latter; it is only a subset of it. This authenticity is thus decisive for all documents, whether writings, testimony, or real evidence. Perhaps, to remove any doubt, it would have been relevant to restate this point, <a href="https://www.gautrais.com/publications/etude-juridique-sur-la-loi-concernant-le-cadre-juridique-des-technologies-de-linformation-rlrq-c-c-1-1/" rel="noopener noreferrer" target="_blank">as suggested in the 2020 study cited above</a>, namely the necessity of both components. Ideally, it would have been wise to do so somewhere other than in the provisions on technological writings, precisely because the issue does not concern writings alone. The proposal we made reads as follows:</p>
<blockquote><p>2811.3 (Option 2): Any piece of evidence, regardless of its medium, must be able to show its authenticity, namely its integrity and the author from whom it originates.</p></blockquote>
<p>Still on authenticity, I cannot fail to cite the excellent blog post written by <a href="https://www.chairelrwilson.ca/?p=4138" rel="noopener noreferrer" target="_blank"><strong>Jinzhe Tan</strong></a> (doctoral student at the UdeM Faculty of Law) for a recent blog competition organized by the LR Wilson Chair, in which he was awarded first place. He presents a study he conducted himself showing that it is sometimes difficult to tell whether or not an image was generated by an AI. In his case study, in which 200 &ldquo;original&rdquo; photos gave rise to 1,200 doctored photos, the 17 human participants identified the fakes in about 70% of cases; as for the AIs, the results varied widely, from 44% to 90% depending on the tool used.</p>
<p><strong>4.2</strong> &ndash; I fear that the two forms of reproduction, the copy and the transfer, cannot be fully unified. Simply, <a href="https://www.gautrais.com/publications/etude-juridique-sur-la-loi-concernant-le-cadre-juridique-des-technologies-de-linformation-rlrq-c-c-1-1/" rel="noopener noreferrer" target="_blank">and as proposed in the study cited above</a>, it seems to me that we should distinguish reproduction that &ldquo;multiplies&rdquo; (the copy, whose Latin etymology means abundance (<em>copia</em>)) from the transfer, which consists in substituting for the &ldquo;original&rdquo; document. The substitutive transfer, as now found in the Loi sur le notariat, amended in 2023, indeed requires a criterion of perpetuation that a copy cannot satisfy. A transfer must be documented, whereas a copy is &ldquo;only&rdquo; required to be faithful to the original.</p>
<p><strong>4.3</strong> &ndash; Third, in matters of digital evidence, it is essential to read <a href="https://www.canlii.org/fr/qc/qcca/doc/2018/2018qcca608/2018qcca608.html" rel="noopener noreferrer" target="_blank">Benisty c. Kloda</a> (2018 QCCA 608), which very systematically sets out the 5 steps in the analysis of such evidence, namely:</p>
<ol>
<li>What kind of evidence is the recording?</li>
<li>Is the recording a technological document?</li>
<li>Is proof of authenticity required, as provided in 2855 CCQ?</li>
<li>What are the criteria of authenticity?</li>
<li>What are the procedures for contestation under 262 CPC?</li>
</ol>
<p>We will limit ourselves to the first point: the Court of Appeal held that the recording is real evidence on the basis of its function, namely to &ldquo;allow the judge to make his own findings.&rdquo;</p>
<blockquote>
<p><a href="https://www.canlii.org/fr/qc/qcca/doc/2018/2018qcca608/2018qcca608.html" rel="noopener noreferrer" target="_blank">[56] <strong>An audio recording may be real evidence or testimony. That characterization depends on the function of the recording.</strong></a></p>
<p><a href="https://www.canlii.org/fr/qc/qcca/doc/2018/2018qcca608/2018qcca608.html" rel="noopener noreferrer" target="_blank">[57] If the content of the recording is a person&rsquo;s statement about past facts of which they had personal knowledge, it is testimony (2843 <i>C.c.Q.</i>).</a></p>
<p><a href="https://www.canlii.org/fr/qc/qcca/doc/2018/2018qcca608/2018qcca608.html" rel="noopener noreferrer" target="_blank">[58] For this extrajudicial statement to be admitted in evidence, it must first satisfy the rules set out in articles 2869 to 2874 <i>C.c.Q.</i> Given the provisions of article 2874 <i>C.c.Q.</i>, its authenticity must also be demonstrated. Subject to exceptions, such a recording treated as testimony can neither serve to prove a juridical act or a writing (2860 to 2862 <i>C.c.Q.</i>) nor contradict a juridical act recorded in writing (2863 <i>C.c.Q.</i>).</a></p>
<p><a href="https://www.canlii.org/fr/qc/qcca/doc/2018/2018qcca608/2018qcca608.html" rel="noopener noreferrer" target="_blank">[59] If the content of the recording instead allows the court to observe a fact documented by a person at a specific moment, it is real evidence (2854 <i>C.c.Q.</i>). <strong>Thus, when the recording captures a contemporaneous fact, caught live, it will be real evidence.</strong> (footnotes omitted; emphasis ours)</a></p>
</blockquote>
<p><img decoding="async" src="https://www.gautrais.com/files/sites/185/2026/03/Capture-decran-le-2026-03-30-a-16.23.39-475x359.png" alt="" srcset="https://www.gautrais.com/files/sites/185/2026/03/Capture-decran-le-2026-03-30-a-16.23.39-475x359.png 475w,https://www.gautrais.com/files/sites/185/2026/03/Capture-decran-le-2026-03-30-a-16.23.39.png 638w" sizes="(max-width: 299px) 100vw, 299px" referrerpolicy="no-referrer" loading="lazy"></p>
<p>&nbsp;</p>
<p>The distinction between a writing, testimony, and real evidence thus turns on the relationship to time:</p>
<ul>
<li>WRITING = memorializing for the <strong>future</strong> (instrumentary writing)</li>
<li>TESTIMONY = recounting <strong>past</strong> facts</li>
<li>REAL EVIDENCE = capturing a moment &ldquo;T&rdquo; (a <strong>contemporaneous fact</strong>)</li>
</ul>
<p>&nbsp;</p>
<p><strong>4.4</strong> &ndash; Lastly, contrary to the <a href="https://ssl.editionsthemis.com/revue/article-5057-ab-ovo-des-lrorigine-la-loi-concernant-le-cadre-juridique-des-technologies-de-lrinformation-les-documents-technologiques-et-le-cadre-conceptuel-de-la-preuve-judiciaire.html" rel="noopener noreferrer" target="_blank">position of Professor Panaccio</a>, I think we should amend articles of the CCQ rather than the LCCJTI, the linkage between the LCCJTI and the CCQ having been poorly executed. In particular, it seems to me that 4-5 provisions should, at a minimum, be introduced under article 2811 CCQ (and not after 2837 CCQ, in a section on writings). In addition to removing certain provisions, notably the highly controversial <a href="https://canlii.ca/t/19b8#art7" rel="noopener noreferrer" target="_blank">article 7 of the LCCJTI</a>.</p>
<p>&nbsp;</p>
<h1><strong>Video of the Conference</strong></h1>
]]></content>
	<updated>2026-03-29T17:13:40+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-03-29T17:13:40+00:00</updated>
		<title>Vincent Gautrais</title></source>

	<category term="événements"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-30:/284137</id>
	<link href="https://law.stanford.edu/2026/03/30/architectural-negligence-what-the-meta-verdicts-mean-for-openai-in-the-nippon-life-case/" rel="alternate" type="text/html"/>
	<title type="html">Architectural Negligence: What the Meta Verdicts Mean for OpenAI in the Nippon Life Case</title>
	<summary type="html"><![CDATA[<p>We saw two verdicts in two days. State of New Mexico v. Meta Platforms, Inc., decided March 24, 2026...</p>]]></summary>
	<content type="html"><![CDATA[<p>We saw two verdicts in two days. <i>State of New Mexico v. Meta Platforms, Inc.</i>, decided March 24, 2026, found Meta liable under New Mexico&rsquo;s Unfair Practices Act for misleading consumers about platform safety and endangering children, and ordered $375 million in civil penalties. The following day, a California jury in <i>K.G.M. v. Meta Platforms, Inc. &amp; YouTube LLC</i> found Meta and YouTube negligent in the design and operation of their platforms, concluding that design features caused addiction and mental health harms and awarded $6 million, half of it punitive. Together, they can be considered the Rosetta Stone for <i>Nippon Life Insurance Co. v. OpenAI</i>, which I wrote about <a href="https://law.stanford.edu/2026/03/07/designed-to-cross-why-nippon-life-v-openai-is-a-product-liability-case/" rel="noopener noreferrer" target="_blank">here</a> and the legal setup in all three cases is identical. What varies is the domain of harm. <i>Meta</i> dealt with child safety. <i>Nippon Life</i> deals with the unauthorized practice of law (UPL). The litigation strategy used in in the March 2026 cases is the same that Nippon Life will likely make in Illinois, and it is the same strategy that will likely be used in every licensed profession plaintiff that AI has in its crosshairs.</p>
<p><b>The Design vs. Content Pivot</b></p>
<p>Section 230 of the Communications Decency Act functions as an immunity, not an affirmative defense, and tech companies typically invoke it in a motion to dismiss to stop litigation before discovery begins. Meta raised arguments in both the New Mexico and California proceedings consistent with Section 230&rsquo;s traditional content-immunity framing, arguing it was a passive conduit for third-party generated content and therefore immune from liability for what that content did. But the courts in both proceedings allowed design-based and consumer protection claims to proceed. That did not immediately resolve the cases, but it opened the door to discovery, and discovery is where the cases were won.</p>
<p>With that door opened, the New Mexico jury was able to see internal Meta documents and evidence uncovered through the NM AG&rsquo;s investigation, including Operation MetaPhile, employee warnings that had been disregarded, and evidence the AG argued showed Meta had deliberately designed its platforms to addict young users and connect them with predators. The California jury saw the same architecture of corporate knowledge and deliberate design choice and responded with punitive damages. Neither jury was deciding whether Meta was responsible for what some predator posted. Both were deciding whether Meta architected the loop that made the harm foreseeable, systematic, and profitable.</p>
<p>Section 230 arguments will be raised in <i>Nippon Life</i>, but the Meta litigation suggests they will face the same limiting analysis. And OpenAI&rsquo;s own System Card, the published disclosure documenting its safety architecture, alignment choices, and residual risk assessments, creates a contradiction that OpenAI cannot easily resolve. When a company publishes a detailed account of how it shapes, filters, and aligns its model&rsquo;s outputs, it has staked out a position that is difficult to reconcile with a neutrality claim. While a defense attorney will argue that Section 230 and the System Card are complementary, one functioning as a legal shield, the other as a failure-to-warn mitigation, the response to that framing is going to be that what matters is not what the company disclosed, but what the company built.</p>
<p><b>This Was All Foreseeable</b></p>
<p>OpenAI&rsquo;s knowledge of its models&rsquo; failure modes is already public. It published research explaining why language models hallucinate, documenting the frequency with which models generate false information with high expressed confidence. Its technical literature on RLHF describes a training methodology that rewards outputs users rate positively, which in practice creates incentives toward outputs that sound authoritative and agreeable, independent of whether they are accurate. And a Stanford University study led by Myra Cheng, <a href="https://arxiv.org/abs/2510.01395" rel="noopener noreferrer" target="_blank">Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence</a>, found widespread social sycophancy across production LLMs, including OpenAI&rsquo;s, concluding that model training rewards agreement as well as accuracy.</p>
<p>Roman Yampolskiy, a computer science and engineering professor and AI safety researcher, argues in <i>AI: Unexplainable, Unpredictable, Uncontrollable</i> that LLM developers operate in a state of deep ignorance regarding the internal logic of their own systems. They understand the architecture but have almost no visibility into the reasoning behind any specific output, and, on his account, certain safety guarantees are mathematically unreachable for systems of this complexity. If Yampolskiy is correct, the developer cannot claim those failures were unpredictable.</p>
<p><b>The Defective Feature</b></p>
<p>Product liability doctrine requires the plaintiff to identify a specific, articulable defect. In the Meta litigation, the defects were the infinite scroll, variable-reward notification timing, suppressed engagement signals, and algorithmic amplification. Each was an engineering choice that could have been made differently, and this moved the cases from editorial neutrality into product liability territory.</p>
<p>The analogous defect in <i>Nippon Life</i> is the absence of refusal architecture. In my January 2012 <a href="https://law.stanford.edu/2012/01/14/computational-law-applications-unauthorized-practice-law/" rel="noopener noreferrer" target="_blank">Computational Law Applications and the Unauthorized Practice of Law</a> post, I introduced the concept of the uncrossable threshold (UT), a design principle that separates the provision of legal information from UPL. ChatGPT crossed the UT the moment it told Dela Torre that her attorney&rsquo;s advice was wrong.</p>
<p><b>What Follows</b></p>
<p><i>Nippon Life</i> is lining up to be the first major case to apply the architectural negligence logic of <i>Meta</i> to the domain of unlicensed professional practice. And it will not be the last. If juries in New Mexico and California can hold a technology company liable for designing a system it knew would harm children, a court in Illinois might very well hold a technology company liable for designing a system it knew would practice law and harm not only the end user, but the defendant, the court, the taxpayer, etc. And if this finding can happen in law, it can happen in medicine, finance, and the other licensed professions in which AI models are unlawfully used.</p>]]></content>
	<updated>2026-03-30T13:48:45+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-03-30T13:48:45+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="artificial intelligence"/>

	<category term="eran kahana"/>

	<category term="llm liability"/>

	<category term="rlhf"/>

	<category term="unauthorized practice of law"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-30:/284119</id>
	<link href="https://law.stanford.edu/2026/03/30/who-owns-digital-thoughts-the-limits-of-property-law-and-the-2025-unesco-recommendation-on-the-ethics-of-neurotechnology/" rel="alternate" type="text/html"/>
	<title type="html">Who Owns Digital Thoughts? The Limits of Property Law and the 2025 UNESCO Recommendation on the Ethics of Neurotechnology</title>
	<summary type="html"><![CDATA[<p>The rapid advancement of Brain&ndash;Computer Interfaces (BCIs) and artificial intelligence (AI) in neurot...</p>]]></summary>
	<content type="html"><![CDATA[<p>The rapid advancement of Brain&ndash;Computer Interfaces (BCIs) and artificial intelligence (AI) in neurotechnology has moved beyond speculative science and clinical experimentation into commercial and regulatory relevance.[1] Advances in neural sensing and AI now permit systems capable of translating patterns of brain activity into text or other communicative outputs, and in some cases enabling users to control digital systems or physical devices through neural signals. As these technologies increasingly migrate into consumer-facing and workplace settings, they generate novel forms of data: neural signals and the probabilistic inferences derived from them.</p>
<p>As algorithms analyze data from neural activity to generate inferences about cognitive and affective states, a foundational legal question emerges: how should law conceptualize and regulate information that reveals, or purports to reveal, the contents of the human mind? For many years, U.S. data governance has relied heavily on notice-and-consent architectures embedded in privacy statutes and consumer protection law. While American privacy law is not reducible to a pure property regime, it often treats personal data as an object of exchange subject to disclosure and contractual allocation.[2] Whether that structure is adequate for neural data is increasingly contested.</p>
<h3><strong>I. The Limits of Property-Adjacent Privacy Frameworks</strong></h3>
<p>American privacy law&mdash;including statutes such as the California Privacy Rights Act (CPRA)&mdash;reflects a hybrid structure combining consumer protection, informational privacy, and market-based consent mechanisms.[3] Under this framework, data processing is generally permissible provided that firms disclose their practices and individuals are afforded certain forms of consumer choice, including the ability to consent to specific uses of sensitive data, opt out of data sales or sharing, and exercise statutory rights such as access or deletion. Scholars and regulators, however, have long questioned whether digital consent models function as meaningful exercises of autonomy.[4]</p>
<p>The concern is amplified in the neurotechnology context. Users of consumer EEG devices or neuro-adaptive systems may lack the technical capacity to understand how raw neural signals can be transformed into predictive or probabilistic inferences about emotion, attention, or preference.[5] AI systems do not merely collect neural signals; they generate inferential profiles that may have legal or economic consequences.[6] Existing privacy statutes often regulate collection and sharing, but they provide limited procedural mechanisms for contesting algorithmic inferences as such. As Brandon Garrett has argued in the broader AI context, procedural due process principles become salient when automated systems generate determinations that materially affect individuals without meaningful opportunities for explanation or challenge.[7]</p>
<p>A second concern relates to commodification. Conceptualizing neural data primarily as a transferable asset risks normalizing its exchange as a condition of employment, insurance, or service access. Property concepts can be analytically useful in structuring entitlements, but they may insufficiently capture the qualitative distinction between commercial data and information that reveals&mdash;or enables inference about&mdash;an individual&rsquo;s mental life.[8] Where regulation implicates the architecture of cognition itself, dignity and autonomy concerns arise that are not easily reduced to market exchange models.</p>
<p>These critiques do not imply that privacy statutes are irrelevant. Rather, they suggest that additional normative frameworks may be required when technologies directly implicate freedom of thought and mental integrity.</p>
<h3><strong>II. The Human Rights and &ldquo;Neurorights&rdquo; Framework</strong></h3>
<p>In response to these concerns, legal scholars and bioethicists have proposed the development or clarification of &ldquo;neurorights&rdquo;&mdash;interpretations of existing human rights principles tailored to neurotechnological contexts. Marcello Ienca and Roberto Andorno have argued that traditional rights to privacy and bodily integrity may require doctrinal refinement where technologies can access or modulate neural processes.[9]</p>
<p>Central to this discussion is the concept of cognitive liberty, sometimes described as mental self-determination.[10] As articulated by scholars, cognitive liberty encompasses the right to control one&rsquo;s mental processes and to be free from non-consensual intrusion or manipulation. It also implies that individuals should not be subjected to coercive &ldquo;neuro-surveillance&rdquo; or compelled disclosure of cognitive information absent compelling justification.[11]</p>
<p>Related principles include mental privacy and mental integrity. Mental privacy would protect individuals against unauthorized extraction or decoding of neural data.[12] Mental integrity extends established protections against physical interference to technologically mediated interventions that alter or influence cognitive states. Rather than framing the problem primarily in terms of ownership, this approach emphasizes the protection of autonomy, dignity, and freedom of thought.</p>
<p>At the same time, human rights framing is not self-executing. International human rights instruments often operate at a high level of abstraction and depend upon domestic implementation. Without legislative incorporation and enforcement mechanisms, rights-based language may remain aspirational.[13] The analytical question is therefore not whether to invoke human rights, but how to operationalize them within domestic legal systems and translate broadly articulated norms into locally intelligible legal and institutional practices.[14]</p>
<h3><strong>III. The 2025 UNESCO Recommendation: Normative Significance and Limits</strong></h3>
<p>In November 2025, UNESCO adopted the Recommendation on the Ethics of Neurotechnology.[15] As a Recommendation, the instrument does not create binding treaty obligations under international law. It does, however, articulate a normative framework endorsed by UNESCO member states concerning the governance of brain&ndash;computer interfaces and neural data.</p>
<p>The Recommendation situates neurotechnology within a human rights framework, emphasizing human dignity, freedom of thought, mental privacy, and autonomy. It calls upon states to adopt appropriate legal and regulatory measures to prevent harmful uses, including applications that facilitate coercive control, unlawful surveillance, or manipulation. It also highlights the risks associated with deploying neurotechnology in employment and commercial contexts where power asymmetries may undermine meaningful consent.</p>
<p>The Recommendation does not impose enforceable prohibitions. Rather, its significance lies in establishing a shared normative baseline and encouraging domestic reform. The instrument also emphasizes the importance of informed consent in the collection and use of neural data. At the same time, this emphasis highlights a tension identified earlier in the context of notice-and-consent privacy models: consent-based governance models may be insufficient where technologies generate probabilistic inferences about mental states that individuals may not fully understand or control. Even so, the Recommendation reflects an emerging international consensus that neural data warrants treatment beyond that accorded ordinary consumer information.</p>
<h3><strong>IV. Conclusion and Policy Implications</strong></h3>
<p>The governance of neurotechnology raises structural questions about the adequacy of existing privacy frameworks. While U.S. consumer privacy statutes in some states provide important tools, they may not fully address technologies that generate inferences about mental states.</p>
<p>A defensible reform agenda would not require abandoning current statutory structures but supplementing them. Legislatures could explicitly classify neural data and derived cognitive inferences as highly sensitive information subject to heightened safeguards. Several U.S. states, including California, Colorado, Montana, and Connecticut, have already begun experimenting with this approach by classifying neural data as sensitive personal information under state privacy statutes, while no comparable federal framework currently exists.[16] Legislatures could further restrict conditioning employment or essential services on the disclosure of neural information. They could also require meaningful transparency, explainability, and contestability where AI systems draw inferences about cognitive or affective states with material consequences.</p>
<p>The core claim is not that neural data can never be conceptualized within property or privacy frameworks. Rather, it is that legal systems should resist reducing neural information to an ordinary market commodity. Where regulation touches the integrity of mental life, doctrines of autonomy, dignity, and freedom of thought must play a central role.</p>
<h3><strong>References</strong></h3>
<p>[1] See Nita A. Farahany, <em>The Battle for Your Brain</em> (2023) (discussing emerging neurotechnology and its societal implications).</p>
<p>[2] Jane R. Bambauer,&nbsp;<em>How to Get the Property Out of Privacy Law</em>, 133 Yale L.J. F. 1087 (2024).</p>
<p>[3] Cheryl Saniuk-Heinig, <em>Private Rights of Action in US Privacy Legislation</em>, IAPP (June 10, 2024), <a href="https://iapp.org/resources/article/private-rights-of-action-us-privacy-legislation" rel="noopener noreferrer" target="_blank">https://iapp.org/resources/article/private-rights-of-action-us-privacy-legislation</a>.</p>
<p>[4] Lauren Henry Scholz, <em>The Illusion of Consent: Rethinking Privacy Online</em>, Ga. St. U. L. Rev. (2025), <a href="https://www.gsulawreview.org/blog/the-illusion-of-consent-rethinking-privacy-online/" rel="noopener noreferrer" target="_blank">https://www.gsulawreview.org/blog/the-illusion-of-consent-rethinking-privacy-online/</a>.</p>
<p>[5] <em>See</em> Farahany, <em>supra</em> note 1.</p>
<p>[6] See Brandon L. Garrett, <em>Artificial Intelligence and Procedural Due Process</em>, 27 U. Pa. J. Const. L. 933 (2025).</p>
<p>[7] <em>Id.</em></p>
<p>[8] Talya Deibel,&nbsp;<em>Private Law and the Inner Self: Comparative Perspectives on the Governance of Neurotechnology</em>, 14 Glob. J. Comp. L. 105 (2025).</p>
<p>[9] Marcello Ienca &amp; Roberto Andorno,&nbsp;<em>Towards New Human Rights in the Age of Neuroscience and Neurotechnology</em>, 19 Life Sci., Soc&rsquo;y &amp; Pol&rsquo;y 5 (2017).</p>
<p>[10] Jan-Christoph Bublitz,&nbsp;<em>&ldquo;My Mind Is Mine!?&rdquo;: Cognitive Liberty as a Legal Concept</em>, in&nbsp;<em>Cognitive Enhancement</em>&nbsp;233 (Elisabeth Hildt &amp; Andreas G. Franke eds., 2013).</p>
<p>[11] Council of Europe, <em>CDBIO Report on Neurotechnologies</em> (2021), <a href="https://rm.coe.int/round-table-report-en/1680a969ed" rel="noopener noreferrer" target="_blank">https://rm.coe.int/round-table-report-en/1680a969ed</a>.</p>
<p>[12] <em>See</em> Ienca &amp; Andorno, <em>supra</em> note 9.</p>
<p>[13] UNESCO,&nbsp;<em>Recommendation on the Ethics of Neurotechnology</em>, U.N. Doc. SHS/BIO/REC-NEURO/2025 (Nov. 2025); U.N. Human Rights Council,&nbsp;<em>Report of the Special Rapporteur on the Right to Privacy</em>, U.N. Doc. A/HRC/58/6 (2025).</p>
<p>[14] Sally Engle Merry, <em>Human Rights and Gender Violence: Translating International Law into Local Justice</em> (Univ. Chicago Press 2005) (describing the process of &ldquo;vernacularization,&rdquo; through which international human rights norms are translated and adapted into local legal and cultural contexts).</p>
<p>[15] <em>See</em> UNESCO, <em>supra</em> note 13.</p>
<p>[16] See Cal. Civ. Code &sect; 1798.140 (West 2025) (classifying neural data as sensitive personal information under the CCPA, as amended by SB 1223); Colo. Rev. Stat. &sect; 6-1-1303(4)(b) (2024) (including neural data within &ldquo;biological data,&rdquo; a sensitive data category under the CPA); <em>see also</em> Mont. Code Ann. &sect; 50-46-102(11) (2025) (defining &ldquo;neurotechnology data&rdquo;); Conn. Gen. Stat. &sect; 42-515(23) (2026) (defining neural data from central nervous system activity).</p>]]></content>
	<updated>2026-03-30T15:01:36+00:00</updated>
	<author><name>Bo Hyoung Lee</name></author>
	<source>
		<id>https://law.stanford.edu/blog/lawandbiosciences/</id>
		<link rel="self" href="https://law.stanford.edu/blog/lawandbiosciences/"/>
		<updated>2026-03-30T15:01:36+00:00</updated>
		<title>Law and Biosciences Blog - Stanford Law School</title></source>

	<category term="ai governance"/>

	<category term="brain-computer interface"/>

	<category term="data commodification"/>

	<category term="freedom of thought"/>

	<category term="international human rights"/>

	<category term="mental integrity"/>

	<category term="neuroscience"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-30:/284115</id>
	<link href="https://www.gautrais.com/conferences/ccq-numerique-livre-7-de-la-preuve/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=ccq-numerique-livre-7-de-la-preuve" rel="alternate" type="text/html"/>
	<title type="html">CCQ + Numérique: Livre 7 + De la preuve, Salon François Chevrette + Zoom(30 mars 2026)</title>
	<summary type="html"><![CDATA[<p>Pavillon Maximilien-Caron,&nbsp;3101, chemin de la Tour&nbsp;,&nbsp;3e&nbsp;&eacute;tage, a-3464&nbsp;&ndash;&nbsp;Salon&nbsp;Fran&ccedil;ois-Chevre...</p>]]></summary>
	<content type="html"><![CDATA[<div>
<div><span>Pavillon Maximilien-Caron, <a href="https://maps.google.ca/maps?q=3101,%20chemin%20de%20la%20Tour+Montr%C3%A9al&amp;hl=fr&amp;ie=UTF8&amp;sll=45.498658,-73.616933&amp;t=m&amp;z=17&amp;vpsrc=0" rel="noopener noreferrer" target="_blank">3101, chemin de la Tour</a>, 3rd floor, A-3464 &ndash; Salon Fran&ccedil;ois-Chevrette<br>
Montr&eacute;al (QC) H3T 1J7</span></div>
<div><span><a href="http://fcdroit.umontreal.ca/Web/MyCatalog/ViewP?id=lnSFfMOQGIwxe%2f9iYP1JIg%3d%3d&amp;pid=OPWhgFdTt9fynJhm%2fIXQ4A%3d%3d" rel="noopener noreferrer" target="_blank">Attend online &ndash; Montreal time</a></span></div>
<div></div>
</div>
<div>
<h2>Description</h2>
<hr>
<div>
<p>This activity examines the interactions between the various books of the <i>Code civil du Qu&eacute;bec</i> and new and digital technologies.</p>
<p>This session focuses in particular on Book 7, on evidence, as well as on the <i>Loi concernant le cadre juridique des technologies de l&rsquo;information</i>.</p>
</div>
<h2>Speakers</h2>
<hr>
<div>
<ul>
<li>Vincent Gautrais (UdeM)</li>
<li>Claude Marseille (Blakes)</li>
<li>Soleica Monnier (Fasken)</li>
<li>Charles-Maxime Panaccio (UOttawa)</li>
</ul>
<p>To register, <a href="https://calendrier.umontreal.ca/activite/ccq-numerique-livre-7-de-la-preuve" rel="noopener noreferrer" target="_blank"><strong>click the link here!</strong></a></p>
<h2>Video of the conference</h2>
<hr>
</div>
</div>]]></content>
	<updated>2026-03-30T12:54:49+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-03-30T12:54:49+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-27:/283837</id>
	<link href="https://www.gautrais.com/blogue/2026/03/27/quen-est-il-des-donnees-de-vos-comptes-fidelite-le-rappel-de-la-depersonnalisation-des-donnees-par-le-cpvp-decision-loblaw/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=quen-est-il-des-donnees-de-vos-comptes-fidelite-le-rappel-de-la-depersonnalisation-des-donnees-par-le-cpvp-decision-loblaw" rel="alternate" type="text/html"/>
	<title type="html">Qu’en est-il des données de vos comptes fidélité&amp;#160;? Le rappel de la dépersonnalisation des données par le CPVP (décision Loblaw)</title>
	<summary type="html"><![CDATA[<p>Lola Gregorowius est &eacute;tudiante dans le cadre du cours DRT6929 (Vie priv&eacute;e + Num&eacute;rique) (Hiver 2026)&nbsp;...</p>]]></summary>
	<content type="html"><![CDATA[<p><strong><a href="https://www.gautrais.com/files/sites/185/2026/01/LolaG.jpg" rel="noopener noreferrer" target="_blank"><img decoding="async" src="https://www.gautrais.com/files/sites/185/2026/01/LolaG-475x610.jpg" alt="" srcset="https://www.gautrais.com/files/sites/185/2026/01/LolaG-475x610.jpg 475w,https://www.gautrais.com/files/sites/185/2026/01/LolaG-725x931.jpg 725w,https://www.gautrais.com/files/sites/185/2026/01/LolaG.jpg 739w,https://www.gautrais.com/files/sites/185/2026/01/LolaG-475x610.jpg 475w,https://www.gautrais.com/files/sites/185/2026/01/LolaG-725x931.jpg 725w,https://www.gautrais.com/files/sites/185/2026/01/LolaG.jpg 739w" sizes="(max-width: 193px) 100vw, 193px" referrerpolicy="no-referrer" loading="lazy"></a>Lola Gregorowius est &eacute;tudiante dans le cadre du cours DRT6929 (Vie priv&eacute;e + Num&eacute;rique) (Hiver 2026)&nbsp;&nbsp;</strong></p>
<p>Nous avons d&eacute;j&agrave; tous adh&eacute;r&eacute; &agrave; un compte fid&eacute;lit&eacute; d&rsquo;un magasin&nbsp;; ils permettent de gagner des points qui peuvent se traduire par des rabais ou encore des offres commerciales. On finit souvent par en accumuler une grande quantit&eacute;. Mais avez-vous d&eacute;j&agrave; essay&eacute; de supprimer un de ces comptes fid&eacute;lit&eacute;&nbsp;? C&rsquo;est ce qu&rsquo;ont tent&eacute; de faire les d&eacute;tenteurs d&rsquo;un compte fid&eacute;lit&eacute; (aussi appel&eacute; PC Optium) offert par les compagnies Loblaw. Certains se sont heurt&eacute;s &agrave; des difficult&eacute;s techniques et ont donc contact&eacute; le service client&egrave;le du magasin&nbsp;; mais aucune r&eacute;ponse. Apr&egrave;s un certain temps d&rsquo;attente, des plaintes ont &eacute;t&eacute; d&eacute;pos&eacute;es aupr&egrave;s du Commissariat &agrave; la protection de la vie priv&eacute;e (CPVP) afin de signaler un potentiel abus.</p>
<p>Presque 2 ann&eacute;es d&rsquo;enqu&ecirc;te plus tard, le Commissariat a rendu ses <a href="https://www.priv.gc.ca/fr/mesures-et-decisions-prises-par-le-commissariat/enquetes/enquetes-visant-les-entreprises/2026/lprpde-2026-001/" rel="noopener noreferrer" target="_blank">conclusions </a>le 5 mars 2026 qui r&eacute;v&egrave;le <strong>deux manquements &agrave; la LPRPDE</strong>.</p>
<h4><em>Quid</em>: Loblaw Companies</h4>
<p><em>Loblaw Companies Limited</em> (Loblaw) is a Canadian food and pharmacy retailer founded in 1919. With more than 2,400 stores across every Canadian province, Loblaw holds a major share of the retail market.</p>
<h4><em>Quid</em>: The Commissariat &agrave; la protection de la vie priv&eacute;e</h4>
<p>The Commissariat &agrave; la protection de la vie priv&eacute;e (CPVP), known in English as the Office of the Privacy Commissioner of Canada, is the federal body charged with overseeing the application of the two federal statutes on the protection of personal data: the <em>Loi sur la protection des renseignements personnels</em> (the federal Privacy Act) and the <em>Loi sur la protection des renseignements personnels et les documents &eacute;lectroniques</em> (LPRPDE). Reporting directly to Canada&rsquo;s federal Parliament, it investigates complaints submitted to it and conducts research and audits in the field of personal information protection.</p>
<h2>1. The context</h2>
<p>As noted above, the Loblaw companies have a strong presence across Canada, with roughly 18 million PC Optimum members. However, against the backdrop of rising food prices, Loblaw became the target of a <a href="https://toronto.citynews.ca/2024/06/02/as-month-long-boycott-of-loblaws-ends-what-effect-has-it-had-on-grocery-store-giant/" rel="noopener noreferrer" target="_blank">boycott movement</a> launched in May 2024. It was then that the CPVP began receiving complaints from customers who could not delete their PC Optimum accounts. The investigation therefore turned on two main questions: Does Loblaw follow up on privacy complaints filed by the individuals concerned? And does Loblaw ensure that personal information is not kept longer than necessary once a member deletes their PC Optimum account?</p>
<h2>2. Two well-founded violations</h2>
<h3>a) The obligation to provide a process for receiving complaints</h3>
<p>Under <a href="https://laws-lois.justice.gc.ca/fra/lois/p-8.6/page-7.html" rel="noopener noreferrer" target="_blank">principle 4.10 of the LPRPDE</a>, any individual must be able to complain about non-compliance with the principles set out in the federal statute to the organization concerned, which must take appropriate measures to resolve the problem.</p>
<p>As a reminder, Loblaw did not respond to some complaints from users who wanted to delete their loyalty accounts. Loblaw sought to justify itself by pointing to the heavy volume of requests it had to handle during that period. The Commissioner rejected this argument, finding that, given the <strong>unreasonable delay</strong> involved, the Loblaw companies had contravened principle 4.10. In its view, complying with the federal statute means being able to process requests within a certain time. Note that this time frame remains within the Commissioner&rsquo;s discretion, and it gives no guidance on how it is to be applied.</p>
<h3>b) The obligation not to retain data collected for specified purposes longer than necessary</h3>
<p>Under <a href="https://laws-lois.justice.gc.ca/fra/lois/p-8.6/page-7.html" rel="noopener noreferrer" target="_blank">principle 4.5 of the LPRPDE</a>, there is a limit on the retention of personal information: retention must serve a specified and known purpose. The principle then sets out what should be done with data that is no longer needed, namely destroying, erasing, or de-identifying it.</p>
<p>As the Commissioner&rsquo;s findings explain, Loblaw retains certain data after a PC Optimum account is deleted:</p>
<blockquote><p>&ldquo;<em>Loblaw confirmed that, when a member deletes their PC Optimum account online, their contact information is deleted and replaced with a fictitious email address, but that Loblaw retains historical transaction data, loyalty program data, and usage data</em>&rdquo;</p></blockquote>
<p>The company therefore opted to de-identify the personal information in its care. <a href="https://www.parl.ca/documentviewer/fr/44-1/projet-loi/C-27/premiere-lecture?col=2" rel="noopener noreferrer" target="_blank">Bill C-27</a> defines <strong>de-identify</strong> as:</p>
<blockquote><p>&ldquo;<em>to modify personal information so as to reduce the risk, without eliminating it, that an individual could be directly identified</em>.&rdquo;</p></blockquote>
<p>Beyond this definition, the CPVP takes a new position on the de-identification of personal information. When a company chooses this method, it must apply it &ldquo;<strong>on an ongoing basis</strong>&rdquo; and therefore take into account new technologies that could allow individuals to be re-identified. The Commissioner thus places this position in the broader context of a world in constant technological evolution.</p>
<p>In making this choice, the Loblaw companies bear a significant responsibility to verify that their de-identification of the data actually works. The CPVP also recalled the factors for assessing re-identification risk that it set out in an <a href="https://www.priv.gc.ca/fr/mesures-et-decisions-prises-par-le-commissariat/enquetes/enquetes-visant-les-institutions-federales/2022-23/pa_20230529_aspc/" rel="noopener noreferrer" target="_blank">Investigation into the collection and use of de-identified mobility data in the context of the COVID-19 pandemic</a>.</p>
<p>Its conclusion is as follows:</p>
<blockquote><p>&ldquo;<em>we consider that merely removing names, telephone numbers, and email addresses from accounts is not sufficient for Loblaw to demonstrate that the data it retains is de-identified</em>.&rdquo;</p></blockquote>
<p>In light of its investigation, the Commissioner therefore found that the company was not sufficiently de-identifying its customers&rsquo; data.</p>
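<p>To make the Commissioner&rsquo;s point concrete, here is a minimal sketch in Python (purely illustrative; the record fields and function names are hypothetical and not drawn from the decision or from Loblaw&rsquo;s systems) of why stripping direct identifiers alone does not de-identify a record:</p>
<pre><code># Hypothetical illustration: masking only the direct identifiers
# leaves quasi-identifiers behind.

record = {
    "name": "Jane Doe",                      # direct identifier
    "email": "jane.doe@example.com",         # direct identifier
    "phone": "514-555-0100",                 # direct identifier
    "purchases": [                           # quasi-identifiers: a near-unique
        ("2025-11-02", "store 114", 83.12),  # behavioural fingerprint
        ("2025-11-09", "store 114", 27.40),
    ],
}

def mask_direct_identifiers(rec):
    """Mimics what the findings describe: contact details swapped for fictitious values."""
    masked = dict(rec)
    masked["name"] = "deleted"
    masked["email"] = "fictitious@example.invalid"
    masked["phone"] = None
    return masked  # transaction, loyalty, and usage data remain intact

masked = mask_direct_identifiers(record)
# The retained purchase history is effectively unique to one person, so it can
# still be matched against outside data. That residual linkability is exactly
# the re-identification risk the CPVP wants assessed on an ongoing basis.
</code></pre>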
<h3>c) The confusion in Loblaw&rsquo;s privacy policy</h3>
<p>If we look at Loblaw&rsquo;s privacy <a href="https://www.loblaw.ca/fr/full-loblaw-privacy-policy/" rel="noopener noreferrer" target="_blank">policies</a>, we can read at section 7.0:</p>
<blockquote><p>&ldquo;<em>we are committed to protecting your privacy by using a combination of measures [&hellip;] multi-factor authentication, masking, encryption, logging, and monitoring</em>.&rdquo;</p></blockquote>
<p>Under the &ldquo;More details&rdquo; tab, the policy describes the various processes used, such as masking, de-identification, and anonymization.</p>
<p>All these terms create a certain <strong>confusion</strong> about the processes Loblaw actually uses. Indeed, &ldquo;de-identification&rdquo; and &ldquo;anonymization&rdquo; do not mean the same thing. I refer you to the <a href="https://www.blogueducrl.com/2023/11/chronique-du-cti-pseudonymisation-anonymisation-depersonnalisation/" rel="noopener noreferrer" target="_blank">blog post by Erwan Jonch&egrave;res and Samy Si Chaib</a> for a full explanation of the difference between the two terms.</p>
<p>Likewise, the term &ldquo;masking,&rdquo; which the company defines as &ldquo;<em>hiding your information in such a way that the structure remains the same but the content is no longer identifiable</em>,&rdquo; does not fully match the Commissioner&rsquo;s findings.</p>
<p>Still at section 7.0 D, on how long information is retained, we can read:</p>
<blockquote><p>&ldquo;<em>We will retain your personal information for as long as necessary to fulfil the purposes for which it was collected [&hellip;]</em>.&rdquo;</p></blockquote>
<p>This complies with the LPRPDE framework.</p>
<p>However, we then read: &ldquo;<em>Once your personal information is no longer required, it will be destroyed or anonymized (so that the information no longer identifies you)</em>.&rdquo; Once again we find the term &ldquo;anonymize,&rdquo; which is not the same process as de-identification, and we come back to the same confusion.</p>
<p>I also refer you to a <a href="https://www.fasken.com/fr/knowledge/2022/11/24-anonymization-and-de-identification-under-bill-c-27" rel="noopener noreferrer" target="_blank">blog post by Daniel Fabiano</a> of the firm Fasken, which explains in detail what is at stake in these terms under Bill C-27 (a bill that was abandoned but offers a good starting point for reflection).</p>
<p>Note that this policy was updated on October 6, 2025, and may well be amended in light of the findings issued by the Commissioner.</p>
<h2>3. The CPVP&rsquo;s final conclusions and recommendations</h2>
<h3>a) An improved complaint-handling process</h3>
<p>During the CPVP&rsquo;s investigation, Loblaw took steps to correct the weaknesses in its internal complaint-handling system, such as additional staff training and the correction of technical problems. In particular, the pending requests to delete PC Optimum accounts were resolved.</p>
<p>In light of these improvements, the Privacy Commissioner considered this element of the complaint <strong>resolved</strong>.</p>
<p>One may still ask whether these changes will be enough and whether the efforts will continue. Indeed, such a complaint-handling process must be reviewed continuously in order to remain compliant with the federal statute.</p>
<h3>b) Calling on a neutral third party to assess the effectiveness of the de-identification</h3>
<p>The federal Commissioner did not receive enough information from Loblaw to find that it de-identifies the information of former PC Optimum account holders sufficiently to comply with the federal statute. It therefore recommends that the company engage an <strong>independent third party</strong> to verify that its de-identification process is effective.</p>
<p>Although the company stated that it disagreed with the federal findings, it agreed to submit to a third-party review and then to provide a report to the Commissioner.</p>
<p>This element of the complaint was considered <strong>conditionally resolved</strong>, on the condition that the report meets legislative expectations for the protection of personal data.</p>
<p>After two years of investigation, the findings have rather limited practical effect on the Loblaw companies. This limited effectiveness shows the limits of the federal Commissioner&rsquo;s powers, which it tries to offset through <strong>longer-term engagement with companies</strong>.</p>
]]></content>
	<updated>2026-03-27T14:58:13+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-03-27T14:58:13+00:00</updated>
		<title>Vincent Gautrais</title></source>

	<category term="cours"/>

	<category term="mes étudiant-e-s"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-26:/283733</id>
	<link href="https://www.gautrais.com/conferences/6155/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=6155" rel="alternate" type="text/html"/>
	<title type="html">Droit décolonial + numérique, Salon François Chevrette + Zoom(26 mars 2026)</title>
	<summary type="html"><![CDATA[<p>Venez &eacute;couter le professeur Ralph Michael (Director at the Max Planck Institute for Comparative and ...</p>]]></summary>
	<content type="html"><![CDATA[<div dir="auto">Venez &eacute;couter le professeur <strong>Ralph Michael</strong> (<span>Director at the Max Planck Institute for Comparative and International Private Law</span>) accompagn&eacute; du professeur <span><strong>Toussaint Nothias</strong>&nbsp;</span>(<span>New York University</span>)&nbsp; qui partageront leurs r&eacute;flexions sur ce sujet&nbsp;!</div>
<div dir="auto"></div>
<div dir="auto">&#128205; En personne &agrave; l&rsquo;Universit&eacute; de Montr&eacute;al (A-3464, Salon Fran&ccedil;ois-Chevrette, Facult&eacute; de droit, UdeM);</div>
<div dir="auto">&#128250; Diffusion en direct sur Zoom;</div>
<div dir="auto">&#128351; 11h00 | 1 heure 30 de formation continue reconnue</div>
<div dir="auto"></div>
<div dir="auto">&#128073; Inscription gratuite ici&nbsp;:&nbsp;<strong><a href="https://fcdroit.umontreal.ca/Web/MyCatalog/ViewP?pid=OPWhgFdTt9fynJhm%2fIXQ4A%3d%3d&amp;id=gl6VQTeXRz5k3O0Fdm2%2fcw%3d%3d&amp;cvState=cvDate=26-03-2026" target="_blank" rel="noopener noreferrer">FcDroit</a></strong></div>]]></content>
	<updated>2026-03-26T14:08:14+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-03-26T14:08:14+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-25:/283661</id>
	<link href="https://law.stanford.edu/2026/03/19/email-based-ai-agents-for-law-firms-mixus-stanford-codex-group-meeting-3-19-2026/" rel="alternate" type="text/html"/>
	<title type="html">Email-Based AI Agents for Law Firms – Mixus  | Stanford CodeX Group Meeting 3.19.2026</title>
	<summary type="html"><![CDATA[<p>Elliot Katz, co-founder and CEO of Mixus, presented to the Stanford CodeX group about his company...</p>]]></summary>
	<content type="html"><![CDATA[<p>Elliot Katz, co-founder and CEO of Mixus, presented to the Stanford CodeX group about his company&rsquo;s email-based AI agents designed for law firms. Drawing on his background as an attorney and his prior startup Phantom Auto (which kept humans in the loop for autonomous vehicles), Katz built Mixus around the same principle: AI needs human oversight for high-stakes work. Mixus agents work entirely through email &mdash; attorneys simply email tasks in plain language and receive completed work product like redlines, issues lists, and cap tables in return &mdash; eliminating the change management burden that has slowed AI adoption in legal.</p>
<p>The platform includes firm-level approval workflows, automatic playbook generation from past documents, and deterministic gates that prevent outputs from advancing without human sign-off. The discussion touched on concerns around rubber-stamping, attorney-client privilege, and data security, with Mixus addressing those through SOC 2 compliance, zero data retention agreements with their model provider (Anthropic&rsquo;s Claude), and an auditable email trail of who reviewed and approved each output.</p>
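<p>As a rough sketch of what such a deterministic gate could look like (an invented Python reconstruction for this write-up, not Mixus code; the class and field names are hypothetical), the rule is enforced in code rather than left to model behavior: a draft cannot be released until a named reviewer has signed off, and each sign-off is recorded, mirroring the auditable email trail described above.</p>
<pre><code># Minimal human-in-the-loop "deterministic gate" (hypothetical, not Mixus code):
# an AI draft cannot advance until a named reviewer signs off, and every
# sign-off leaves an auditable record.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Draft:
    task: str
    body: str
    approvals: list = field(default_factory=list)  # audit trail of sign-offs

    def approve(self, reviewer_email: str) -> None:
        # Record who signed off and when, like the email approval trail.
        self.approvals.append((reviewer_email, datetime.now(timezone.utc)))

    def release(self) -> str:
        # A deterministic rule, not a model judgment: no sign-off, no release.
        if not self.approvals:
            raise PermissionError("Blocked: no attorney has signed off.")
        return self.body

draft = Draft(task="Redline term sheet", body="...redlined document...")
draft.approve("partner@firm.example")  # reviewer of record
print(draft.release())                 # only now can the work product go out
</code></pre>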
<p><img fetchpriority="high" decoding="async" src="https://law.stanford.edu/wp-content/uploads/2026/03/email-based-ai-agents-for-law-firms-mixus-ceo-on-human-in-the-loop-legal-ai-stanford-codex-group-meeting-3-19-2026.png" alt="Email-Based AI Agents for Law Firms: Mixus CEO on Human-in-the-Loop Legal AI | Stanford CodeX Group Meeting 3.19.2026" srcset="https://law.stanford.edu/wp-content/uploads/2026/03/email-based-ai-agents-for-law-firms-mixus-ceo-on-human-in-the-loop-legal-ai-stanford-codex-group-meeting-3-19-2026.png 886w,https://law.stanford.edu/wp-content/uploads/2026/03/email-based-ai-agents-for-law-firms-mixus-ceo-on-human-in-the-loop-legal-ai-stanford-codex-group-meeting-3-19-2026-300x163.png 300w,https://law.stanford.edu/wp-content/uploads/2026/03/email-based-ai-agents-for-law-firms-mixus-ceo-on-human-in-the-loop-legal-ai-stanford-codex-group-meeting-3-19-2026-768x417.png 768w,https://law.stanford.edu/wp-content/uploads/2026/03/email-based-ai-agents-for-law-firms-mixus-ceo-on-human-in-the-loop-legal-ai-stanford-codex-group-meeting-3-19-2026-147x80.png 147w,https://law.stanford.edu/wp-content/uploads/2026/03/email-based-ai-agents-for-law-firms-mixus-ceo-on-human-in-the-loop-legal-ai-stanford-codex-group-meeting-3-19-2026-220x119.png 220w,https://law.stanford.edu/wp-content/uploads/2026/03/email-based-ai-agents-for-law-firms-mixus-ceo-on-human-in-the-loop-legal-ai-stanford-codex-group-meeting-3-19-2026.png 886w,https://law.stanford.edu/wp-content/uploads/2026/03/email-based-ai-agents-for-law-firms-mixus-ceo-on-human-in-the-loop-legal-ai-stanford-codex-group-meeting-3-19-2026-300x163.png 300w,https://law.stanford.edu/wp-content/uploads/2026/03/email-based-ai-agents-for-law-firms-mixus-ceo-on-human-in-the-loop-legal-ai-stanford-codex-group-meeting-3-19-2026-768x417.png 768w,https://law.stanford.edu/wp-content/uploads/2026/03/email-based-ai-agents-for-law-firms-mixus-ceo-on-human-in-the-loop-legal-ai-stanford-codex-group-meeting-3-19-2026-147x80.png 147w,https://law.stanford.edu/wp-content/uploads/2026/03/email-based-ai-agents-for-law-firms-mixus-ceo-on-human-in-the-loop-legal-ai-stanford-codex-group-meeting-3-19-2026-220x119.png 220w" sizes="(max-width: 886px) 100vw, 886px" referrerpolicy="no-referrer" loading="lazy"></p>
<p><a href="https://youtu.be/_CKXvSSCBSs?si=VAMu-rIk1oRuDZk4" rel="noopener noreferrer" target="_blank">Watch Mixus Codex Group Meeting on Youtube</a></p>
<p><span>Roland Vogl: Welcome everyone to our Codex group meeting. It is March 19th, 2026. I was just telling Elliot, our guest here, and my colleague Elaine, that we&rsquo;re in the midst of a hot phase of preparations for our FutureLaw week.</span></p>
<p><span>So if you haven&rsquo;t registered yet, you should do so. It&rsquo;s going to be an amazing event, and just an amazing group of people who have already announced their participation. So don&rsquo;t miss it. Join us for that. And today we have, as I said, Elliot Katz here. He&rsquo;s co-founder and CEO of Mixus, which is bringing agentic AI into law firms and doing so in a safe manner. And so we&rsquo;re really thrilled to have you here with us today, Elliot, and very excited to learn about what you&rsquo;ve been up to. So I&rsquo;ll turn it over to you.</span></p>
<p><span>Elliot Katz: Great, great. Thanks, Roland. Thanks so much for inviting me. I&rsquo;m honored to be speaking to everyone here today at Stanford CodeX. As Roland mentioned, I&rsquo;m the co-founder of Mixus. What we do is we provide email-based AI agents with built-in firm-level oversight to legal teams, including to multiple AmLaw 20 firms. As to my background, I&rsquo;m what I like to call a recovering attorney.</span></p>
<p><span>So I did go to Cornell Law School, and from there I went to DLA Piper, where I led their autonomous vehicle practice. And then as a sixth year, I moved to McGuireWoods as a partner and global chair of their autonomous vehicle practice. And through my experience working with my autonomous vehicle company clients and getting to ride in their vehicles, I really concluded that autonomous vehicles could not be commercially deployed at scale without some way of keeping a human in the loop when the vehicles needed assistance.</span></p>
<p><span>Then fast forward to 2017. I met my now co-founder, Shai, who you can see here, when he gave me a teleoperated ride around the block in a vehicle he was remotely driving from his living room in Palo Alto. So at that point, I left the practice of law and we started Phantom Auto, where our technology enabled humans sitting literally thousands of miles away to remotely assist or operate unmanned vehicles when they ran into issues that autonomy could not handle.</span></p>
<p><span>Now fast forward to 2024. Shai and I co-founded Mixus together with essentially the same premise, right? Which is AI can do a lot, but if the stakes are high and the work is truly of consequence, you absolutely need a human in the loop. I mean, even if AI can get you 80 to 90% of the way there, you still need humans for that 10 to 20% when the circumstances mandate that everything&mdash;and I mean everything&mdash;must be done correctly.</span></p>
<p><span>So that&rsquo;s who we are. Our DNA is human-in-the-loop, and we&rsquo;re now applying that DNA to the legal sector with Mixus agents, which again combine artificial and human intelligence to provide the level of work product that this sector requires. So the first thing I&rsquo;ll tell you, and this is based on my experience as an actual practitioner and from deploying Mixus agents to some of the top law firms in the world, is that your jobs as attorneys are safe.</span></p>
<p><span>I could probably find multiple LinkedIn posts in my feed right now that say, you know, lawyers and law firms won&rsquo;t exist come 2027 because you&rsquo;ll just talk to a chatbot. But we really believe that that is nonsense. Because for AI to be deployed at scale in the legal sector, you have to mix us&mdash;right, that&rsquo;s the name of the company&mdash;artificial intelligence and human intelligence together. Right? Because the correct answer to a contract negotiation question is not a matter of factual accuracy, right? It depends on judgment and the client&rsquo;s risk tolerance, the deal type, the counterparty&rsquo;s position, and a multitude of other factors that simply don&rsquo;t exist in the public domain. So Mixus exists not to displace attorneys but to greatly augment their brilliant legal minds.</span></p>
<p><span>So let&rsquo;s dive in. So if we can go to the next slide. Why can&rsquo;t attorneys just use fully autonomous agents for their work? First, because they&rsquo;re probabilistic, right? And no client has ever hired an attorney for them to guess the next most probable word needed in a purchase agreement in a massive M&amp;A deal, right? Clients hire attorneys for laser precision. They&rsquo;re paying them hundreds of thousands, millions of dollars for laser precision. And that&rsquo;s simply not what probabilistic AI provides.</span></p>
<p><span>Okay, number two: it&rsquo;s not enough for an agent to have n-of-one oversight, right? You need oversight at the firm level so that attorneys with different areas of expertise can review and approve when appropriate and when needed, right?</span></p>
<p><span>And third, and this cannot be overlooked, the AI tools that exist today are standalone tools, right? Attorneys need to learn a new tool and integrate it into their workflow. And that level of change management has already proven very difficult for the legal sector, right? AI has taken off for coding, for example. And part of that is because coders are highly technical, right? For lawyers, as I&rsquo;m sure many of you in the audience can appreciate, that&rsquo;s not always the case, right? As a first-year attorney, I remember working with one partner&mdash;he was a brilliant attorney, but to this day I&rsquo;m still not sure that he knew how to turn on a laptop, right? So asking someone like that to learn a new tool, learn a new UI, integrate it into their workflow&mdash;very, very difficult to do.</span></p>
<p><span>But with Mixus agents, we set out from day zero to solve all of these issues, right? Number one: we meet attorneys where they are most of their day, which is their email inbox. To use our agents&mdash;and I&rsquo;ll show you guys this in a second&mdash;you just email </span>agent@mixus.com<span> a task in natural language, the same way you would talk to an associate or a partner, right? And you can cc any of your colleagues. The agent then emails back the completed work product, and the attorneys review and approve the agent outputs.</span></p>
<p><span>So we are mimicking exactly how lawyers already work today&mdash;exactly what they&rsquo;ve now been doing for decades since email came out in the mid-&rsquo;90s&mdash;collaboratively and in natural language over email, so that we fit exactly into their workflow with no change management at all. And because we enable that firm-level attorney oversight, legal teams get the efficiency of AI agents without the risk of incorrect AI outputs making their way to their clients or to, you know, a legal brief or anything that they&rsquo;re submitting to the court. Obviously, we&rsquo;ve all seen public examples of when that&rsquo;s gone horribly wrong.</span></p>
<p><span>So now let me show you a quick demo of a few of our agents. And for the demo, I&rsquo;ll show you some of our venture financing agents, as those are near and dear to my heart as a startup founder. So let&rsquo;s start with our term sheet agent. And let me first set the scene, right? So let&rsquo;s say that Mixus gets a term sheet today from XYZ VC firm. The first thing I do as the founder of Mixus is I forward that term sheet to my VC partner, and then they probably, in all likelihood, send it to one of their associates to do a redline and an issues list, which is exactly what I want to see. That&rsquo;s the work product I need. And then the partner reviews, and I get it back a few days later. Also, maybe there&rsquo;s tax implications or stuff like that&mdash;they bring in a tax partner or whatever it is. But the whole process, soup to nuts, is a few days here.</span></p>
<p><span>If you&rsquo;re looking at the screen right now, in this example, Christian is emailing the Mixus agent the term sheet that he received, right? And the Mixus agent&mdash;the email address, you don&rsquo;t see it in this format, but it&rsquo;s </span>agent@mixus.com<span>&mdash;and he&rsquo;s emailing the term sheet. So Christian here is playing like the partner at the law firm, and he&rsquo;s also cc&rsquo;ing some of his associates. And he&rsquo;s saying redline the term sheet. So if you go down a few minutes later, he gets back an email that has everything that he would need that I, as a founder, want back from my attorneys. It has the redline, and it also has the issues list.</span></p>
<p><span>And you could, Christian, if you want to open up the redline, just to show everyone what that looks like quickly. Okay, great. Looks like a normal redline. And then go back to the email. And so you also do have the ability to click on the link here and go work directly in our web UI. What we&rsquo;ve found with our deployments thus far: attorneys really want to stay in email. So most of them do everything that they do over email, which is entirely possible. But if you&rsquo;d like a web UI, you can do that as well.</span></p>
<p><span>So go back to the email chain, Christian. So then if you go down, it&rsquo;s telling you everything that it did. It attached the documents. But then one of the associates on the chain says, &ldquo;Agent, please reduce the no-shop period from 60 days to 30 days.&rdquo; So then if you go down, Christian, here it&rsquo;s made that change. It&rsquo;s attached all the new documents. And I don&rsquo;t know if there&rsquo;s anything more after that. If you can keep going down, Christian. Yeah, maybe you can show the issues. Yeah. So here&rsquo;s the issues list that it produced. So you&rsquo;re getting everything that you need, and you&rsquo;re doing it exactly the way that firms are doing it today: collaboratively over email. Anyone can interact with the agent. You saw associates and partners interacting together and with the agent. And it&rsquo;s all in natural language. So there&rsquo;s no learning curve, right? You don&rsquo;t have to understand how to do any of this.</span></p>
<p><span>The second thing that I&rsquo;ll show is after you get the term sheet, we need a pro forma cap table. So Christian, if you could go to&mdash;yeah. So here he&rsquo;s just saying, &ldquo;Agent, create a new pro forma cap table based on the preexisting cap table that he&rsquo;s attaching and the Series A term sheet.&rdquo; And if you can go down, Christian, a few minutes later it&rsquo;s going to provide that pro forma. You can click on the link just to quickly show everyone what that looks like. Looks very nice. It&rsquo;s got the waterfall analysis, etc., which I like to look at, if you go to the left&mdash;stuff like that. So you can go back to the email and keep going down.</span></p>
<p><span>So he had&mdash;oh, I guess that&rsquo;s it for the pro forma. The last one that I&rsquo;ll show you guys is the M&amp;A docs. That&rsquo;s what you need to do after the pro forma. You already&mdash;let&rsquo;s say this company already raised a seed round, and now they&rsquo;re raising their A. So all the attorney has to do is attach the Series Seed docs and then ask for them to be updated based on the new term sheet. And that&rsquo;s exactly what you&rsquo;re going to get here.</span></p>
<p><span>And then if you can keep going down, Christian. He did cc one of his associates. So one of the associates chimes in and says, &ldquo;Hey, I looked at everything, and everything looks good.&rdquo;</span></p>
<p><span>So that is a very high-level, quick overview of Mixus. We&rsquo;re going to take some questions in a second here. But if anyone listening is interested in learning more or trialing our agents, just reach out to me: </span><b>elliot@mixus.ai</b><span>. And because our agents are email-based, there&rsquo;s no complex onboarding or installation required. We can set you up in minutes. And because we&rsquo;re mimicking exactly how firms are doing this today, you know, we don&rsquo;t need any elongated onboarding or anything like that. Attorneys just know how to use it pretty much instantly.</span></p>
<p><span>And last thing I&rsquo;ll say is if you&rsquo;re in the audience right now thinking, you know, &ldquo;Geez, we brought in XYZ AI tool into our org or into our firm, but our attorneys aren&rsquo;t really utilizing that tool,&rdquo; we could be a perfect fit for you. Because our current customers all had or have licenses for other tools as well. But when everything comes down to usability, right&mdash;what AI tools will attorneys actually use day to day and integrate into their core workflow?&mdash;the firms that we&rsquo;re working with today have found that our approach is really unparalleled in the market on that specific front.</span></p>
<p><span>So with that, let me know what questions we can answer, and we&rsquo;ll move from there.</span></p>
<p><span>Roland Vogl:</span><span> Yeah. So there&rsquo;s a couple of questions coming in the chat, but before we go to those, I have a couple of my own. One is: how much setup time is involved for each firm? Presumably, when your agents do those redlines, they must be trained to know what&rsquo;s market for this or that, right? And so that must be based on the knowledge of the firm, or of the firm&rsquo;s human lawyers. How do you handle that process? And the second question: you talk about agents, and there&rsquo;s one email address for the agents, right? But do you have agents for different verticals, you know, VC practice, environmental compliance practice? Separate agents, or is it all one?</span></p>
<p><span>Elliot Katz:</span><span> Yeah. Great question. So there&rsquo;s really two questions in there. As to the first: many of our agents do not require a playbook. But some of our agents either, you know, do require a playbook, or the outputs that you&rsquo;ll receive from the agent will be more tailored to your preferences if you do have a playbook.</span></p>
<p><span>Now, what we consistently heard from our customers, especially early on, is, &ldquo;Listen, even if we have to make an upfront investment of time of a couple of hours of developing our own playbooks, the juice is potentially so worth the squeeze, because then we can use the agents moving forward.&rdquo; And it&rsquo;s not just a one-time, essentially, cost on our time.</span></p>
<p><span>But what we created was an automatic playbook builder. So now all you have to do to create a playbook is email in exemplars. Let&rsquo;s say it was the first agent that I showed, the term sheet agent, right? You email in&mdash;attach a few exemplars of term sheets that you&rsquo;ve done in the past or that you&rsquo;ve redlined, and the system will ingest that. It will automatically create the playbook for you so that you have the foundation. And then you can just go in and make any edits that you want to fit your specific preferences.</span></p>
<p><span>And I think that&rsquo;s what we&rsquo;re showing right now on the screen, is the ability to make those playbooks. And after you make the playbook, the playbooks can also automatically update based on your preferences. So as you go through and do more work with the system, it understands your preferences and things that you changed along the way, and it will check in with you and say, &ldquo;Hey, is this something that&rsquo;s a one-off or a standard that you&rsquo;d like to apply to the playbook generally?&rdquo;</span></p>
<p><span>So that&rsquo;s how we handle that piece. As to your second question, Roland, which was about which agents do we have deployed&mdash;so we have probably deployed like 50 agents at this point, both purely legal agents and also other agents that are not necessarily purely legal, right? For one firm that we&rsquo;re working with, we are deploying&mdash;we&rsquo;ve deployed a task management agent that basically serves as a project manager across all of your matters. It&rsquo;s entirely over email. You can talk to it like a human. So it&rsquo;s an agent that keeps the train on the tracks when associates are working with six different partners and five different matters for each. It can coordinate amongst those groups seamlessly, 100% over email. So again, no tool switching, no change management.</span></p>
<p><span>But we deploy agents that are common in each practice. And then another thing that we do with our customers is we will build and deploy custom-built agents. So not only will we optimize current agents to tailor them to fit their practices specifically, but if they have a new workflow where they would find a lot of value because their firm does a lot of XYZ work, we will create those agents for them as well.</span></p>
<p><span>Roland:</span><span> Got it. So Benjamin raises a good question, too, which is, you know, going to the point that, you know, we need human oversight, but how do we make sure that humans are not just rubber-stamping the AI outputs, right? Like, how hard is it to actually, you know, really go into the outputs of the AI and review, you know, the accuracy of the output? And so we&rsquo;re not sort of like in the ballpark, &ldquo;Okay, let it just go out like that.&rdquo; So what&rsquo;s&mdash;what level of&mdash;what does oversight mean? And how do we make sure that it&rsquo;s not just people rubber-stamping the AI?</span></p>
<p><span>Elliot:</span><span> Yeah, absolutely. Great question. So first of all, to kind of the middle part or second part of your question: very easy to review the outputs, right? These are attorneys where it&rsquo;s their subject matter expertise, right? So you&rsquo;re going in, you&rsquo;re reviewing a redline, you&rsquo;re reviewing a new document that the agent put forth. You have all the facts, you have everything in one chain if you&rsquo;re on email, or in the chat if you&rsquo;re on the web UI.</span></p>
<p><span>As to the second point, there is no kind of blind rubber-stamping here, because at the end of the day, you do have a human who is on record of being responsible for checking this, right? In the same way that my VC partner would send an issues list to an associate today and say, &ldquo;Review this and make sure everything&rsquo;s accurate and all that,&rdquo; that&rsquo;s what&rsquo;s happening when a human reviewer is signing off here as well. And there is a record of who verified, right?</span></p>
<p><span>So some of the firms that we&rsquo;re working with&mdash;they&rsquo;ve created rules, right, where AI outputs cannot go out in work product to a client before at least one partner signs off, or whatever the rule may be. And you have a record, an email, of someone saying, &ldquo;I verified that this looks good, and we can proceed.&rdquo; So it&rsquo;s the same kind of social pressure, for lack of a better term, as to why you would get the same outcome that you would get today.</span></p>
<p><span>Roland:</span><span> So like&mdash;that&rsquo;s good. Yeah. So Jason has a question. Sorry, Benjamin, did you want to add something on that?</span></p>
<p><span>Benjamin:</span><span> Yeah, I&rsquo;d like to raise&mdash;like, there have been federal judges that have had their interns or their clerks, you know, do things, and they just rubber-stamp it. And even federal judges who&rsquo;ve had AI stuff that has been rubber-stamped. And while they didn&rsquo;t literally put their signature on it, I think the meaningfulness of a review is to verify that the person who actually is reviewing it understands what is going on in some sort of interactive way. And I know that you have a limited managed work budget. And when you get too much work, you just sort of rubber-stamp things. And so how do you sort of force them to slow down and put like a roadblock to make sure that they tell the system that they understand why?</span></p>
<p><span>Elliot:</span><span> Yeah. I would answer similarly to what I said before: it&rsquo;s no different than if you give an associate something to review before it goes out to a client today&mdash;they know their butt&rsquo;s on the line, for lack of a better term. That&rsquo;s similar to the way our system works. There&rsquo;s still a person on the front line making sure that everything is in place before it goes over to the client, and all of that is auditable. There&rsquo;s a record within email or the chat as to who was doing those checks.</span></p>
<p><span>Roland:</span><span> Yeah. I guess it&rsquo;s also a question of continuing to instill a sensitivity in people who use AI in professional services and elsewhere: it&rsquo;s not perfect, it may hallucinate, and their reputation is on the line if they don&rsquo;t provide meaningful review. I think it&rsquo;s a little unclear now, but it will become clearer in the future what level of control different humans will be able to exercise over AI. But yeah, it&rsquo;s a really good question, Benjamin. And Jason had a question&mdash;you could just ask it&mdash;about client privilege.</span></p>
<p><span>Jason:</span><span> Yeah, yeah. So obviously attorneys are using Harvey AI and Legora and Westlaw and all the other stuff. But in practice, what are you hearing as far as pushback on using an LLM on the backend? It&rsquo;s putting client data into the LLM. And yes, I&rsquo;m sure the APIs have good terms of service, but still&mdash;at this point, what kind of concerns are attorneys or law firms raising at the corporate level about attorney-client privilege? Because the New York Times versus OpenAI case from last May still hasn&rsquo;t been fleshed out as to where it&rsquo;s going to land, and people are kind of wondering about that.</span></p>
<p><span>Elliot:</span><span> Yeah, yeah. So I mean, first of all, on the security side, especially for the customers that we work with, we go through, you know, very lengthy security reviews. We have all the things that these big law firms would expect, right? We&rsquo;re SOC 2, we have all the ISOs that we need in place, etc. Also, with our model provider, we have a zero data retention agreement in place. So I think we&rsquo;re buttoned up on that side in the eyes of our customers.</span></p>
<p><span>Going to your question about privilege, you know, my opinion&mdash;and this is, you know, based on many conversations that I&rsquo;ve had with our customers&mdash;is that this is basically settled law, right? In the sense that law firms have been using vendors for years, right, that do document review and other things. And those are considered part of the privilege. So, you know, we haven&rsquo;t run into any issues there yet. But we&rsquo;d love to hear if you have, you know, kind of a different tack or different thoughts on the subject.</span></p>
<p><span>Jason:</span><span> Well, a lot of it is perception on this matter, right? There are some folks I&rsquo;ve worked with in some law firms who feel that way&mdash;but it&rsquo;s perception. And you have to make the case and say, &ldquo;Well, everybody else is doing that.&rdquo; And they&rsquo;re like, &ldquo;Well, we&rsquo;re not everybody else,&rdquo; right? So I was just curious what you&rsquo;ve seen in the trenches as you work with some of them, because some of them can be very, very conservative on that point.</span></p>
<p><span>Elliot:</span><span> Oh yeah, yeah. No, for sure. Email has been established in the industry very clearly. And if you have a cloud provider, a branded cloud, you&rsquo;ve got decades of precedent there. But LLMs in particular have some aspects to them, as far as bioterrorism and other things, where the providers have people watching a sampling of outputs, and that&rsquo;s throwing up other questions. Anyway, what we can offer&mdash;</span></p>
<p><span>Jason:</span><span> No, no, I think&mdash;listen, this is a very important topic.</span></p>
<p><span>Elliot:</span><span> And to your point about cloud providers and all that: we have talked with major law firms that have not migrated to the cloud at all. They are still completely on-prem, right? So these are very conservative, security-first organizations, and we molded our company around that expectation. I mean, I came from this world. So security is probably the thing we&rsquo;ve invested the most in, time-wise and dollar-wise: just a huge amount of time and money.</span></p>
<p><span>Jason:</span><span> Yeah, it&rsquo;s&mdash;oh, you&rsquo;ve done a nice job. It looks really good.</span></p>
<p><span>Elliot:</span><span> Thank you. Thank you, I appreciate it.</span></p>
<p><span>Roland:</span><span> Yeah. There&rsquo;s a couple more questions in the chat. I&rsquo;m not sure we can get to all of them. I know Dasa has mentioned a little bit of his work with the agents he&rsquo;s been creating. So, Dasa, you want to elaborate?</span></p>
<p><span>Dasa:</span><span> Yeah, sure. Thank you, Roland. And yeah, I agree&mdash;great presentation. I&rsquo;ve been doing a lot of work with clients lately developing agents that review and red-team a draft before the attorney or the business person even sees it. Do you have flows that include some sort of review or red-team loop, basically as a gate, before the draft gets to the next step in a process?</span></p>
<p><span>Elliot:</span><span> Great question. Christian, do you want to chime in on this one? I know it&rsquo;s a topic near and dear to your heart.</span></p>
<p><span>Christian:</span><span> Yeah. So we do have a way for you to define different steps in a process, so you can encode very specific processes that you have. I can show you one example of that over here. If you&rsquo;re following one specific process very regularly, one thing you can do is say, &ldquo;Save this workflow as an agent.&rdquo; And then whenever a new email comes in, you can have that specific agent run. You can also create these custom workflows just by talking to the agent. So yeah, that&rsquo;s possible as well.</span></p>
<p><span>Dasa:</span><span> Okay, that&rsquo;s great. Thanks.</span></p>
<p><span>Dasa:</span><span> And look, Roland, I&rsquo;m wearing my CodeX hat, getting ready for FutureLaw.</span></p>
<p><span>Elliot:</span><span> Oh, I appreciate it, yes.</span></p>
<p><span>Roland:</span><span> Yeah, getting into&mdash;you&rsquo;re getting into the spirit. I love it.</span></p>
<p><span>Elliot:</span><span> Right, super.</span></p>
<p><span>Roland:</span><span> Okay, so Matthew has one comment&mdash;he thinks that a tool like yours would free up a lot of time, since it&rsquo;s doing a huge chunk of the first round of work. Yeah. This just goes to the concern around, &ldquo;Well, is somebody just going to rubber-stamp it?&rdquo; I think we&rsquo;re going to have way more time than we ever had before once these AI tools are doing a huge amount of the work.</span></p>
<p><span>Elliot:</span><span> Yeah. I couldn&rsquo;t agree more with that statement. We&rsquo;re already seeing it with our customers. Some of the feedback we&rsquo;re getting is, &ldquo;I&rsquo;m as busy as I was before we were using the tool. I&rsquo;m just doing a lot more work a lot more efficiently for a lot more clients,&rdquo; right? But I think you are going to see that this role of essentially managing and verifying agent outputs is going to be&mdash;not just in the legal sector, but more broadly&mdash;a big part of how work gets done moving forward.</span></p>
<p><span>Roland:</span><span> Okay. And then Mavi asks a quick question on the backend security of the LLM: with this sort of multi-model architecture, is oversight really mainly carried out by the humans in the loop?</span></p>
<p><span>Elliot:</span><span> So Christian, you want to jump in on that one too?</span></p>
<p><span>Christian:</span><span> We mainly use Claude. And, as Elliot mentioned, we have a zero data retention agreement with them. That&rsquo;s the base model family we use, and there are different models you can choose from. Whenever there&rsquo;s a new model evolution&mdash;right now Opus is the latest one&mdash;that&rsquo;s the one we&rsquo;re using. So we always use the latest and greatest model from Anthropic, basically. That&rsquo;s the underlying model. On the human side, I just want to understand the question&mdash;what was that exactly?</span></p>
<p><span>Mavi:</span><span> Is there a separate layer on the software side to review output, or is it only human-in-the-loop, basically?</span></p>
<p><span>Christian:</span><span> Yeah. What I can say there is that we have deterministic gates: until someone approves one step, it&rsquo;s not going to continue on to the next step, and that&rsquo;s a deterministic thing you can configure. You can do that within the UI&mdash;if you go over here and you want to create a new agent, you can create it here and define these different steps. All of this is possible via email as well: you can just tell the agent to create these different steps for a new agent or workflow that you have. Then you can say, &ldquo;Require verification,&rdquo; and define the users that you want to verify. So in this step, you could say, &ldquo;Okay, I want this user to verify step one before it continues on to step two.&rdquo; And that&rsquo;s going to be a deterministic approval that has to take place before it goes on to the next step. That&rsquo;s similar to what was mentioned before on these custom workflows&mdash;that&rsquo;s how we can support that as well.</span></p>
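<p><em>[A minimal sketch, in Python, of the deterministic gate Christian describes: a workflow whose gated step cannot advance until the named approver signs off. The class and method names are hypothetical illustrations, not the vendor&rsquo;s actual API.]</em></p>
<pre><code># Minimal sketch of a deterministic approval gate (hypothetical API).
# A step that names a required approver cannot be advanced past until
# that user records an approval; the gate, not the model, decides.
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    required_approver: str | None = None   # None = ungated step
    approved_by: str | None = None

@dataclass
class Workflow:
    steps: list
    position: int = 0

    def approve(self, user: str) -> None:
        step = self.steps[self.position]
        if user != step.required_approver:
            raise PermissionError(f"{user} is not the approver for {step.name!r}")
        step.approved_by = user            # auditable sign-off record

    def advance(self) -> None:
        step = self.steps[self.position]
        if step.required_approver and not step.approved_by:
            raise RuntimeError(f"{step.name!r} is blocked pending approval")
        self.position += 1

wf = Workflow([Step("draft redline"),
               Step("send to client", required_approver="partner@firm.com")])
wf.advance()                    # step 1 has no gate
wf.approve("partner@firm.com")  # deterministic gate: partner must sign off
wf.advance()                    # only now does step 2 proceed
</code></pre>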
<p><span>Roland:</span><span> Right. Well, we&rsquo;re already a little bit over time. So we have to close here, unfortunately. But what would be a good way for folks to reach out with any follow-up questions? Do you think you could put your email address into the chat, perhaps?</span></p>
<p><span>Elliot:</span><span> Yeah, absolutely. It would be great. And so the easiest way is just email, or you can find me on LinkedIn as well. And look forward to chatting with anyone that wants to learn more.</span></p>
<p><span>Roland:</span><span> Super. Love it. Thank you so much, Elliot and Christian. It&rsquo;s been a great presentation, very cool. I feel like I&rsquo;ve seen the future. So amazing. Thank you for sharing it with this group, and we look forward to tracking your progress.</span></p>
<p><span>Elliot:</span><span> Okay. Awesome. Thank you guys so much. Really appreciate it, Roland.</span></p>]]></content>
	<updated>2026-03-19T22:08:49+00:00</updated>
	<author><name>CodeX</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-03-19T22:08:49+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="codex"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-24:/283514</id>
	<link href="https://www.gautrais.com/conferences/scribo-ergo-sum-la-chaire-l-r-wilson-organise-un-colloque-etudiant/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=scribo-ergo-sum-la-chaire-l-r-wilson-organise-un-colloque-etudiant" rel="alternate" type="text/html"/>
	<title type="html">Scribo, Ergo Sum&amp;#160;: La Chaire L.R. Wilson organise un colloque étudiant&amp;#160;!, Salon François Chevrette(24 mars 2026)</title>
	<summary type="html"><![CDATA[<p>Dans le cadre du Mois de la recherche &eacute;tudiante organis&eacute; par la&nbsp;Facult&eacute; de droit de l&rsquo;Universit&eacute; de ...</p>]]></summary>
	<content type="html"><![CDATA[<p>Dans le cadre du Mois de la recherche &eacute;tudiante organis&eacute; par la&nbsp;<a tabindex="0" href="https://www.linkedin.com/company/droitumontreal/" target="_blank" rel="noopener noreferrer">Facult&eacute; de droit de l&rsquo;Universit&eacute; de Montr&eacute;al</a>, la Chaire L.R. Wilson a le plaisir d&rsquo;organiser son colloque &eacute;tudiant.</p>
<p>Cette initiative s&rsquo;inscrit dans le cadre du concours de blogue de la Chaire L.R. Wilson, auquel ont particip&eacute; des &eacute;tudiants inscrits aux programmes de baccalaur&eacute;at, de ma&icirc;trise ou de doctorat.</p>
<p>&Agrave; cette occasion, plusieurs &eacute;tudiants viendront pr&eacute;senter leurs contributions. Au terme du processus d&rsquo;&eacute;valuation, les trois meilleures seront s&eacute;lectionn&eacute;es pour publication sur le blogue de la Chaire L.R. Wilson. Des prix seront &eacute;galement remis aux auteurs de ces trois contributions, en reconnaissance de la qualit&eacute; de leurs r&eacute;flexions. &#128184;&#128184;</p>
<p>&#127863; &Eacute;videmment, un cocktail de l&rsquo;amiti&eacute; viendra conclure l&rsquo;&eacute;v&eacute;nement. &#127863;</p>
<p>Nous vous attendons nombreux pour venir encourager les &eacute;tudiants et les jeunes chercheurs.</p>]]></content>
	<updated>2026-03-24T13:52:36+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-03-24T13:52:36+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-23:/283390</id>
	<link href="https://law.stanford.edu/2026/03/23/paraguay-computational-antitrust/" rel="alternate" type="text/html"/>
	<title type="html">The Paraguayan Competition Authority Joins the Stanford Computational Antitrust Project</title>
	<summary type="html"><![CDATA[<p>The Stanford Computational Antitrust Project announces that the Comisi&oacute;n Nacional de la Competencia ...</p>]]></summary>
	<content type="html"><![CDATA[<p>The Stanford Computational Antitrust Project announces that the Comisi&oacute;n Nacional de la Competencia (CONACOM) of Paraguay has joined its network of partner agencies.</p>
<p>CONACOM is the public body entrusted with the application of Paraguay&rsquo;s competition law. Since its establishment, the agency has contributed to the gradual consolidation of competitive conditions across key sectors of the Paraguayan economy. Its activity reflects an institutional trajectory marked by increasing engagement with both domestic enforcement priorities and international cooperation. This evolution is significant. In jurisdictions where competition frameworks are relatively recent, antitrust agencies play a structuring role in shaping market expectations and business conduct. CONACOM&rsquo;s work illustrates how enforcement can operate as a dynamic force.</p>
<p>The partnership with the Stanford Computational Antitrust Project builds on this trajectory. It creates a platform to examine how computational methods can complement existing analytical tools and support evidence-based enforcement. Thibault Schrepel, founder of the Stanford Computational Antitrust Project, stated:</p>
<p>&ldquo;We warmly welcome CONACOM to the project. Paraguay offers a valuable context to study how computational approaches can support competition agencies operating in fast-evolving market environments.&rdquo;</p>
<p>The collaboration will focus on methodological exchanges and applied research. It reflects a joint commitment to strengthening analytical capacity and refining the tools used to evaluate competition.</p>]]></content>
	<updated>2026-03-23T08:00:54+00:00</updated>
	<author><name>CodeX</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-03-23T08:00:54+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="computational antitrust"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-20:/283203</id>
	<link href="https://www.gautrais.com/presse/reglementation-des-technologies-de-linformation-nous-en-sommes-encore-a-ladolescence/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=reglementation-des-technologies-de-linformation-nous-en-sommes-encore-a-ladolescence" rel="alternate" type="text/html"/>
	<title type="html">Réglementation des technologies de l’information: «Nous en sommes encore à l’adolescence» (UdeMNouvelles, 20 mars 2026)</title>
	<summary type="html"><![CDATA[<p>En 1993, Vincent&nbsp;Gautrais &eacute;tudie le droit &agrave; l&rsquo;Universit&eacute; de Montr&eacute;al, plus pr&eacute;cis&eacute;ment le droit des ...</p>]]></summary>
	<content type="html"><![CDATA[<p>En 1993, Vincent&nbsp;Gautrais &eacute;tudie le droit &agrave; l&rsquo;Universit&eacute; de Montr&eacute;al, plus pr&eacute;cis&eacute;ment le droit des affaires en rapport avec les communications &eacute;lectroniques, et d&eacute;cide que son sujet portera sur les contrats par&hellip; t&eacute;l&eacute;copieur.</p>
<p>Comme beaucoup de choses dans le monde num&eacute;rique, cette d&eacute;cision change rapidement. &laquo;L&rsquo;Internet n&rsquo;avait que quatre ans, mais la technologie &eacute;voluait rapidement, et j&rsquo;ai r&eacute;alis&eacute; que j&rsquo;avais choisi le mauvais sujet pour ma th&egrave;se&raquo;, se souvient le chercheur au Centre de recherche en droit public&nbsp;(CRDP).</p>
<h4><a href="https://nouvelles.umontreal.ca/article/2026/03/19/reglementation-des-technologies-de-l-information-nous-en-sommes-encore-a-l-adolescence" rel="noopener noreferrer" target="_blank"><strong>Pour en savoir +</strong></a></h4>]]></content>
	<updated>2026-03-20T16:42:35+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-03-20T16:42:35+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-18:/283055</id>
	<link href="https://www.gautrais.com/conferences/6168/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=6168" rel="alternate" type="text/html"/>
	<title type="html">Le numérique à l’épreuve de la sobriété: Enjeux, potentiels d’innovation et nouvelles trajectoires, HEC Montréal, Édifice Hélène-Desmarais (501, rue De la Gauchetière O, Montréal, QC H2Z 1Z5) (18 mars 2026)</title>
	<summary type="html"><![CDATA[<p>Mercredi, 18 mars 2026&nbsp;au&nbsp;Jeudi, 19 mars 2026


Horaire:&nbsp;9h00 &agrave; 17h30


&nbsp;

Programmation

Sessio...</p>]]></summary>
	<content type="html"><![CDATA[<div>
<div></div>
<div>
<div>
<div><time datetime="2026-03-18T12:00:00Z">Mercredi, 18 mars 2026</time>&nbsp;au&nbsp;<time datetime="2026-03-19T12:00:00Z">Jeudi, 19 mars 2026</time></div>
</div>
</div>
<div><span>Horaire:</span>&nbsp;<span>9h00 &agrave; 17h30</span></div>
<div></div>
</div>
<section>&nbsp;</section>
<section>
<h2>Programmation</h2>
</section>
<p><strong>Session 4 &ndash; Sobri&eacute;t&eacute; num&eacute;rique: lectures transversales</strong></p>
<p>Cette session propose une r&eacute;flexion transversale visant &agrave; explorer comment la notion de sobri&eacute;t&eacute; num&eacute;rique vient reprobl&eacute;matiser les th&eacute;matiques existantes (travail, sant&eacute;, &eacute;ducation, arts et m&eacute;dias, droit et &eacute;thique), en r&eacute;v&eacute;lant de nouveaux enjeux et ouvrant de nouvelles pistes de recherche. Chaque axe contribue ainsi &agrave; enrichir la compr&eacute;hension de la sobri&eacute;t&eacute; num&eacute;rique dans ses dimensions &eacute;thiques, sociales, politiques et culturelles.</p>
<p><strong>Pan&eacute;listes</strong></p>
<ul>
<li>Mod&eacute;ration&nbsp;: St&eacute;phane Roche (Universit&eacute; Laval)</li>
<li>Florent Michelot (Universit&eacute; Concordia)</li>
<li>Allison Malchildon (Universit&eacute; Sherbrooke)</li>
<li>Emilie Dionne (Chercheuse, VITAM)</li>
<li>Vincent Gautrais (Universit&eacute; de Montr&eacute;al)</li>
<li>Cl&eacute;mence Varin (Doctorante, Universit&eacute; Laval)</li>
<li>Tania Saba (Universit&eacute; de Montr&eacute;al)</li>
</ul>]]></content>
	<updated>2026-03-18T16:20:25+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-03-18T16:20:25+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-18:/282976</id>
	<link href="https://law.stanford.edu/2026/03/18/moroccan-competition-council-computational-antitrust/" rel="alternate" type="text/html"/>
	<title type="html">Moroccan Competition Council Joins Stanford Computational Antitrust Project</title>
	<summary type="html"><![CDATA[<p>Morocco&rsquo;s Competition Council (Conseil de la Concurrence) is joining Stanford computational antitru...</p>]]></summary>
	<content type="html"><![CDATA[<p><strong></strong> Morocco&rsquo;s Competition Council (<em>Conseil de la Concurrence</em>) is joining Stanford computational antitrust project  which brings the total number of affiliated competition agencies to over 80 worldwide.</p>
<p>The Stanford Computational Antitrust project, founded and led by Thibault Schrepel (Vrije Universiteit Amsterdam / Stanford CodeX Center for Legal Informatics), is the world&rsquo;s leading initiative at the intersection of competition law and computational methods. It brings together competition agencies and academics to develop empirical and technological tools for modern antitrust enforcement. The project receives no private funding.</p>
<p>Morocco&rsquo;s Competition Council is an independent constitutional institution responsible for ensuring transparency and fairness in economic relations, including the analysis of anti-competitive practices, merger control, and market regulation. Fully operational since 2018, the Council has established itself as one of Africa&rsquo;s most active competition agencies, with a track record of merger decisions and antitrust enforcement that spans digital markets and traditional sectors alike.</p>
<p>&ldquo;We are delighted to welcome the Moroccan Competition Council to the network,&rdquo; said Thibault Schrepel. &ldquo;Their expertise and perspective will strengthen the project&rsquo;s African and Mediterranean representation. Their enforcement experience is directly relevant to the computational challenges we are working to address.&rdquo;</p>
<p>The affiliation deepens the project&rsquo;s engagement across Africa, where competition authorities are increasingly confronting the enforcement challenges raised by digital markets.</p>
<p><strong>About the Stanford Computational Antitrust Project</strong> The Stanford Computational Antitrust project brings together over 80 competition agencies globally to advance empirical and computational approaches to antitrust enforcement. It operates under the Stanford CodeX Center for Legal Informatics and accepts no private funding. More information: <a href="http://www.computationalantitrust.com/" rel="noopener noreferrer" target="_blank">www.computationalantitrust.com</a>.</p>
<p><strong>About Morocco&rsquo;s Competition Council</strong> The <em>Conseil de la Concurrence</em> is Morocco&rsquo;s independent competition agency. It is responsible for ensuring free and fair competition, regulating anti-competitive practices, and advising government and parliament on competition matters.</p>]]></content>
	<updated>2026-03-18T08:01:23+00:00</updated>
	<author><name>CodeX</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-03-18T08:01:23+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="computational antitrust"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-17:/282930</id>
	<link href="https://law.stanford.edu/2026/03/17/the-ungovernable-machine/" rel="alternate" type="text/html"/>
	<title type="html">The Ungovernable Machine</title>
	<summary type="html"><![CDATA[<p>Recursive self-improvement (RSI) is an active deployment priority at frontier AI companies and is be...</p>]]></summary>
	<content type="html"><![CDATA[<p>Recursive self-improvement (RSI) is an active deployment priority at frontier AI companies and is beginning to diffuse into the broader corporate ecosystem. This post argues that boards of companies deploying RSI already face governance exposure under Delaware&rsquo;s duty of oversight as developed in <em>In re Caremark International Inc. Derivative Litigation</em>, 698 A.2d 959 (Del. Ch. 1996), and refined in <em>Stone v. Ritter</em>, 911 A.2d 362 (Del. 2006), <em>Marchand v. Barnhill</em>, 212 A.3d 805 (Del. 2019), and <em>In re McDonald&rsquo;s Corp. S&rsquo;holder Derivative Litigation</em>, 289 A.3d 343 (Del. Ch. 2023). It maps that exposure against California&rsquo;s SB 53, the NIST AI Risk Management Framework (AI RMF 1.0), and the AI Life Cycle Core Principles (AILCCP), and explains what boards, senior management, and general counsel should do before a court is asked to find the gap. The analysis proceeds in three steps: how <em>Caremark</em> and its progeny apply to RSI architectures; how NIST AI RMF 1.0 and AILCCP translate those duties into specific controls; and how SB 53 and emerging SEC expectations sharpen the board&rsquo;s exposure.</p>
<p><strong>RECURSIVE SELF-IMPROVEMENT</strong></p>
<p>Recursive self-improvement (RSI) refers to an AI system&rsquo;s ability to modify the mechanisms by which it improves itself, in ways that carry forward into future iterations. Many current AI systems use feedback loops to break tasks into subtasks, check intermediate results, and revise their plans mid-run. That is behavioral-level self-correction. The system is adjusting its actions, but its underlying architecture, training rules, and learning procedures remain fixed by human engineers. RSI, by contrast, reaches the architecture itself. A recursively self-improving system can generate and integrate changes to its own code, models, or training procedures, so that later versions are more capable of further self-modification. The improvement compounds. Each cycle makes the next cycle more effective.</p>
<p>This post uses RSI to mean systems that meet three conditions: durable self-modification of the mechanisms that produce intelligence; compounding ability to self-modify across iterations; and limited human gating over the self-improvement loop. It is this combination, not the use of feedback loops alone, that creates the governance exposure this post addresses. In governance terms, the question to ask management is not whether the system uses AI, but whether it can alter its own code or training procedures across releases without human review of each material change, and whether those changes are logged in a way the company can reconstruct.</p>
<p>RSI is not confined to some undefined distant future. It is an active commercial and technical priority. Prominent researchers and senior industry figures, including Dario Amodei and Eric Schmidt, have stated publicly that RSI is already being built and deployed.</p>
<p>A system that improves its own performance between deployments reduces iteration costs, compresses competitive timelines, and compounds capability gains in ways that additional headcount cannot replicate. Autonomous optimization allows a system to scale beyond the constraints of human-designed training pipelines, reaching capability levels that manual iteration cannot practically achieve in competitive timeframes.</p>
<p>Alongside this capability, three risk patterns have received attention in the technical literature. The first is <em>behavioral drift</em>. When an agent recursively trains on its own synthetically generated outputs without sufficient grounding in human-generated data, it enters a feedback loop that progressively severs the connection between its behavior and human norms. The practical consequence is a system whose outputs become self-referential and increasingly detached from the tasks it was built to perform. The second is <em>self-poisoning</em>. Minor errors, hallucinated facts, and embedded biases do not wash out across iterations. They compound. Knowledge degrades not suddenly but cumulatively, across a sequence of individually small distortions. The third is <em>goal subversion</em>. The recursive architecture creates a surface for manipulation. Intermediate instructions, whether injected by an attacker or generated by emergent system errors, can redefine the agent&rsquo;s objectives incrementally across cycles. The drift accumulates until the system is pursuing something materially different from its original mandate.</p>
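<p>To make the compounding point concrete, consider a toy calculation (the numbers are purely illustrative and not drawn from any cited study): a 2% per-cycle distortion rate, invisible in any single iteration, erodes almost two-thirds of a system&rsquo;s knowledge fidelity within fifty cycles.</p>
<pre><code># Toy illustration of self-poisoning: small per-cycle errors compound
# rather than washing out. Illustrative numbers only.
error_per_cycle = 0.02          # 2% of retained knowledge distorted per cycle
fidelity = 1.0
for cycle in range(1, 51):
    fidelity *= 1 - error_per_cycle
    if cycle in (1, 10, 25, 50):
        print(f"cycle {cycle:>2}: fidelity {fidelity:.2f}")
# cycle  1: fidelity 0.98
# cycle 10: fidelity 0.82
# cycle 25: fidelity 0.60
# cycle 50: fidelity 0.36
</code></pre>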
<p>And there is a deeper problem. RSI may be able to circumvent the oversight mechanisms imposed on it, not by breaking them, but by influencing its evaluators and auditors, misrepresenting its own capabilities, or evolving faster than any review process can track. This is the control problem that Nick Bostrom, Stuart Russell, and Roman Yampolskiy have each written and talked about at length. A system optimizing for a goal can develop instrumental sub-goals, among them self-preservation and resistance to shutdown, that make it actively resistant to the kind of oversight board-level monitoring requires.</p>
<p>RSI is not limited to frontier labs. Whether a deployment meets the three conditions depends on facts, not labels. Agentic development tools like Claude Code and OpenAI Codex allow software firms of any size to deploy recursive loops that can maintain and extend their own codebases. Whether those loops produce durable self-modification with limited human gating is a question about the specific implementation. Companies in chip design, biotech, and financial services are running AI-driven systems that recursively refine their own algorithms cycle by cycle; some of those systems will meet the conditions and some will not. For companies in retail, logistics, and finance, emerging RSI-style capability is arriving not as internally developed software but as an API integration. A logistics company whose routing agent rewrites its own scheduling code overnight may be running a system that meets all three conditions whether or not it uses that term.</p>
<p>Any deployment that meets those conditions presents the governance exposure this post describes, regardless of whether the company considers itself an AI company. The governance exposure follows the conditions, not the label, and boards outside the frontier tier should not assume the question does not reach them. As I explain in more detail below, from a <em>Caremark</em> perspective, a mid-market logistics firm running a self-rewriting routing agent may present a cleaner test case than a research lab advertising frontier AI.</p>
<p><strong>THE GOVERNANCE PROBLEM</strong></p>
<p>The governance conversation around RSI frames the problem as complexity. Systems iterate faster than humans can track. Architectures become illegible. Audit trails thin out. Complexity is not the problem. The system&rsquo;s structural ungovernability is. And in this case, structural ungovernability is a design choice. Design choices like disabling immutable logs for performance reasons, omitting human approval gates on self-modifying actions, or allowing models to promote their own code changes into production without dual control are what make ungovernability structural rather than incidental. Section 3.1.5 &ldquo;Resource Requirements&rdquo; in NIST AI 800-4 documents that the organizational logic behind those choices is consistent across the industry: comprehensive monitoring is expensive, scaling it is computationally intensive, and qualified AI experts who can oversee it are difficult to find. A company that builds RSI without adequate monitoring infrastructure is not simply being careless. It is making a rational economic decision to forgo a costly function. That economic rationality is precisely what makes the governance failure deliberate rather than inadvertent, and what makes the bad-faith analysis tractable rather than speculative.</p>
<p>Corporate law has a framework for this problem. Delaware&rsquo;s duty of oversight, developed through a line of cases beginning with <em>In re Caremark International Inc. Derivative Litigation</em>, 698 A.2d 959 (Del. Ch. 1996), holds that directors can face liability not only for bad decisions but for failing to build the systems through which material risks are reported to the board. The question is not whether the board understood the risk. It is whether the board ensured it would be told about it.</p>
<p>The absence of an AI-specific compliance baseline makes that obligation acute. In other regulated domains, corporate law and securities regulation establish a minimum floor: audit committee composition and charter requirements, codes of ethics, insider trading policies, clawback provisions. No equivalent floor exists for AI governance. Regulation is fragmented and lags the technology. The board&rsquo;s oversight obligation for RSI therefore rests on the central question of whether it made a good-faith effort to establish board-level reporting systems adequate to the mission-critical risks the company was running.</p>
<p>Whether management has established change control, immutable logging, and human-in-the-loop constraints for an RSI deployment is not merely a technical question, but also a governance one. A board that receives no reporting on whether those systems exist, and asks no questions about them, may have failed to maintain the oversight infrastructure that <em>Caremark</em> demands.</p>
<p><strong>THE DOCTRINAL FRAMEWORK</strong></p>
<p>Delaware&rsquo;s oversight doctrine asks one question: did the board build a system that would have told it about material risks? <em>Stone v. Ritter</em>, 911 A.2d 362 (Del. 2006), embedded <em>Caremark</em> in the duty of loyalty via bad faith and established the doctrine&rsquo;s two-pronged structure. Under the first prong, directors may be liable where they utterly fail to implement a reasonable board-level information and reporting system. Under the second, having implemented such systems, directors may be liable if they consciously fail to monitor operations in the face of red flags. Because the doctrine sounds in loyalty-based bad faith, plaintiffs must plead a knowing failure to act, not mere negligence. That standard has a practical consequence worth noting: even directors who are shielded from duty-of-care liability by a Delaware General Corporation Law (DGCL) &sect; 102(b)(7) charter provision remain exposed to liability for a sustained or systematic oversight failure.</p>
<p><em>Marchand v. Barnhill</em>, 212 A.3d 805 (Del. 2019), clarified when prong one is adequately pled. A company in a domain with mission-critical risks must have a board-level system that brings those risks to its directors. For a company whose core product or platform depends on RSI, safety and controllability are plausible candidates for that treatment. But no court has yet so held.&nbsp;The difficulty is that <em>Marchand</em>&nbsp;arose in a context of immediate physical safety risk and established regulatory exposure, and Delaware courts have not automatically extended the mission-critical rubric to software-based risks. Whether a court applies it to RSI depends on what the board knew, when it knew it, and whether it established a reporting structure adequate to surface those risks. That assessment is made from the position of the board at the time of deployment, not in hindsight.</p>
<p>The mission-critical rubric does not require a monoline structure. RSI is not a product. It is the backend process that generates, maintains, and modifies products. Its governance relevance is systemic, not product-specific. A company with ten distinct product lines, each running on an RSI backend, faces greater exposure from an RSI failure than a monoline company, because the failure propagates across every line simultaneously. The <em>Marchand</em> inquiry is whether the risk is central to the company&rsquo;s operations, not whether the company sells a single product. Where RSI is the architecture underlying a company&rsquo;s core systems, its safety and controllability are central to everything the company does. A diversified company cannot argue that an RSI failure is a localized business loss. California&rsquo;s Transparency in Frontier Artificial Intelligence Act (SB 53) reinforces that conclusion for covered developers by mandating a Frontier AI Framework and periodic catastrophic-risk reporting regardless of product diversity. For those companies, the board&rsquo;s oversight duty for RSI is also anchored in statutory compliance rather than any inference from business structure.</p>
<p>Post-<em>Marchand</em> cases confirm the trajectory. <em>In re Clovis Oncology</em>, No. 2017-0222-JRS (Del. Ch. Oct. 1, 2019), applied the mission-critical logic to a drug company&rsquo;s failure to monitor FDA compliance for its flagship product. <em>Teamsters v. Chou</em>, No. 2019-0816-SG (Del. Ch. Aug. 24, 2020), arose from AmerisourceBergen&rsquo;s operation of an illegal oncology drug repackaging program through a subsidiary; the board received and ignored years of compliance red flags, including a Department of Justice subpoena, before incurring criminal and civil penalties that together totaled $885 million across separate proceedings. The court found a substantial likelihood of <em>Caremark</em> liability where actual board-level information flow was absent on a mission-critical compliance domain, even where management was aware of the problems.</p>
<p>Two further cases extend the analysis. <em>Hughes v. Hu</em>, No. 2019-0112-JTL (Del. Ch. Apr. 27, 2020), involved Kandi Technologies, a Delaware-incorporated electric vehicle components manufacturer, where the audit committee received years of auditor warnings about related-party transaction irregularities and a material weakness in financial reporting, and failed to act; the court rejected trappings of oversight as a safe harbor and held that chronic committee deficiencies and failure to follow up on irregularities can ground both prongs. <em>In re Boeing Co. Derivative Litig</em>., No. 2019-0907-MTZ (Del. Ch. Sept. 7, 2021), brought both prongs to bear on a single fact pattern of insufficient reporting infrastructure at authorization, followed by conscious disregard of safety drift once deployment began. Design choices that disable board-level monitoring can ground <em>Caremark</em> liability. In an RSI context, those design choices include allowing self-modification that bypasses change-management workflows, or architecting systems so that code and model histories cannot be reconstructed for board or regulator-facing investigations.</p>
<p>A related academic argument points in the same direction. In their article &ldquo;AI &amp; the Business Judgment Rule: Heightened Information Duty,&rdquo; Helleringer and M&ouml;slein argue that the business judgment rule&rsquo;s (BJR) &ldquo;reasonably informed&rdquo; standard may evolve as AI monitoring tools become more capable and more accessible. They call this the AI judgment rule. Their argument is that decisions made without the support of available AI tools may no longer satisfy BJR, and they extend that reasoning to monitoring specifically: AI can and should augment the continuous oversight directors are expected to configure.</p>
<p><em>Caremark</em> and the AI judgment rule do not duplicate each other. <em>Caremark</em> sounds in the duty of loyalty via bad faith. The AI judgment rule sounds in the duty of care via inadequate information. The AI judgment rule is not established precedent or codified doctrine; it is an academic argument about where the BJR&rsquo;s &ldquo;reasonably informed&rdquo; standard is heading. Treating it as coordinate authority with <em>Caremark</em> overstates the current legal risk. What they share is a governance implication. A board that failed to establish a reporting system for RSI safety and controllability faces potential exposure under both frameworks as each continues to develop.</p>
<p><strong>THE ARCHITECTURE PROBLEM</strong></p>
<p>Each RSI self-modification cycle overwrites the artifact chain that connects a model&rsquo;s output to a traceable decision and a responsible party. Absent immutable logging and lineage controls, RSI can progressively erode explainability to the point where it is no longer credible in practice. The first casualty is senior management&rsquo;s own audit capacity. The board does not conduct technical audits directly; it depends on management to perform that function and surface the results. When the artifact chain is gone, management has nothing to audit, and therefore nothing to report. A board that received no reporting on whether management had established those controls, and had established no committee structure through which management was required to deliver that assurance, may have allowed the conditions for its own oversight to be designed away. NIST AI 800-4 &sect; 3.1.5 confirms this is not a hypothetical failure mode. It documents fragmented logging across distributed infrastructure, resource constraints on comprehensive monitoring, and the difficulty of hiring and training qualified AI experts as confirmed barriers to post-deployment AI system monitoring across the industry. The governance gap the board faces is not a gap that management simply failed to notice. It is a gap that the economics and workforce realities of AI deployment make predictable, and one that a board exercising reasonable oversight would have required management to address explicitly before deployment.</p>
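<p>For readers who want the mechanics, the following is a minimal sketch of what tamper-evident logging means here. It is the generic hash-chain pattern, not any vendor&rsquo;s or framework&rsquo;s implementation: each entry commits to its predecessor, so a silent rewrite of history is detectable on verification.</p>
<pre><code># Minimal sketch of an immutable (hash-chained) change log.
# Generic pattern for illustration; not a production design.
import hashlib, json, time

class ChangeLog:
    def __init__(self):
        self.entries = []

    def append(self, actor: str, change: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "GENESIS"
        body = {"ts": time.time(), "actor": actor, "change": change, "prev": prev}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        prev = "GENESIS"
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "actor", "change", "prev")}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = ChangeLog()
log.append("agent-v7", "modified reward-shaping procedure")
log.append("agent-v8", "regenerated training pipeline config")
assert log.verify()
log.entries[0]["change"] = "no changes made"   # attempted silent rewrite...
assert not log.verify()                        # ...breaks the chain
</code></pre>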
<p>The NIST AI Risk Management Framework (AI RMF 1.0) anchors this argument in widely accepted guidance. NIST&rsquo;s GOVERN, MAP, MEASURE, and MANAGE functions call for standardized documentation, provenance tracking, model inventories, change management, monitoring, and incident response. These functions are voluntary guidance, not positive law, but they are increasingly receiving legislative attention and deserve a heightened level of attention. Courts generally defer to a board&rsquo;s business judgment on which systems to implement, provided some reasonable system exists. What NIST AI RMF 1.0 supplies is evidence of industry-recognized practices that will likely inform a court&rsquo;s assessment of reasonableness; it does not displace the business judgment rule on implementation choices, and a board&rsquo;s failure to adopt any particular control does not automatically constitute a systematic oversight failure.</p>
<p>The AILCCP, which I developed and maintain as part of my research at Stanford Law School, names three specific controls directly implicated in RSI governance: a Human Approval Gate for Sensitive Actions, sandboxing requirements, and immutable logging. Each targets a distinct point in the RSI loop where oversight can be disabled: the approval gate prevents unauthorized self-modification from executing, sandboxing contains its scope, and immutable logging preserves the record of what occurred. Together they define the conditions under which oversight can function at all. The AILCCP also establishes an Enabling principle that governs how those conditions connect to board-level responsibility. Under that principle, the board&rsquo;s oversight inquiry is whether directors required management to establish and report on those conditions, or whether they accepted deployment without that assurance. Read alongside NIST AI RMF 1.0, these controls provide a practical reference point for what adequate management-level RSI governance looks like. Neither framework, however, is positive law. Courts apply business judgment deference to a board&rsquo;s selection among governance approaches, and the absence of any particular control is not, standing alone, a systematic failure. But what these frameworks supply is a baseline against which a court can assess whether some reasonable system existed at all.</p>
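<p>One way to picture that baseline in operation is a pre-deployment check against the three controls just named. The sketch below is a hypothetical illustration; the field names are invented for this post and do not come from the AILCCP or NIST AI RMF 1.0.</p>
<pre><code># Hypothetical pre-deployment governance check against the three controls
# discussed above. Field names are invented for illustration.
REQUIRED_CONTROLS = {
    "human_approval_gate": "sensitive actions require named-human sign-off",
    "sandboxing": "self-modification runs in a contained environment",
    "immutable_logging": "append-only, tamper-evident change history",
}

def governance_gaps(deployment_config: dict) -> list:
    """Return the required controls this deployment is missing."""
    return [f"{name}: {why}" for name, why in REQUIRED_CONTROLS.items()
            if not deployment_config.get(name, False)]

config = {"human_approval_gate": True, "sandboxing": True, "immutable_logging": False}
for gap in governance_gaps(config):
    print("MISSING ->", gap)    # material for the board-level reporting pack
</code></pre>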
<p>A board that received no documentation that management had implemented those controls, and established no reporting system to surface that gap, has a governance problem that a complexity argument alone will not cure. The more demanding question is whether the record supports bad faith pleaded with particularity. As we will see, a governance failure, standing alone, does not meet that threshold. What changes the analysis is evidence that directors were specifically advised of the risk and chose to proceed without requiring adequate reporting.</p>
<p>Finally, the Helleringer and M&ouml;slein AI judgment rule adds a structural observation. When engineered with robust observability, RSI systems generate exactly the kind of structured, high-volume operational data that AI-augmented monitoring handles most effectively, including change logs, output drift metrics, lineage records, and safety constraint adherence. The board&rsquo;s governance obligation is not to understand the technical architecture. It is to require that management deploy adequate monitoring tools and report the results through a functioning board committee. The board asks the governance question. Management answers it.</p>
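<p>A minimal sketch of that monitoring idea follows, assuming a hypothetical per-cycle safety score and a simple control-chart rule; the three-sigma threshold is illustrative, not a tolerance prescribed by any framework cited here.</p>
<pre><code># Sketch: flag drift in a safety metric across self-improvement cycles.
# Metric, scores, and threshold are hypothetical.
from statistics import mean, stdev

def drift_flag(baseline: list, recent: list, k: float = 3.0) -> bool:
    """True when recent scores drift beyond k standard deviations
    of the baseline distribution (a simple control-chart rule)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(recent) - mu) > k * sigma

baseline_scores = [0.97, 0.96, 0.98, 0.97, 0.96, 0.98]  # pre-deployment runs
recent_scores = [0.91, 0.89, 0.90]                      # post-modification runs
if drift_flag(baseline_scores, recent_scores):
    print("RED FLAG: safety metric outside documented tolerance; escalate")
</code></pre>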
<p><strong>THE OFFICER PROBLEM</strong></p>
<p><em>Caremark</em> exposure does not end at the board. <em>In re McDonald&rsquo;s Corp. S&rsquo;holder Derivative Litigation</em>, 289 A.3d 343 (Del. Ch. 2023), arose from the termination of McDonald&rsquo;s Chief People Officer amid allegations of sexual misconduct and a pattern of workplace culture failures at the company. The court recognized that corporate officers owe a duty of oversight within their areas of responsibility, requiring them to make a good-faith effort to establish information systems and to elevate red flags to the board.</p>
<p>The CTO who designed the RSI architecture and the Chief AI Officer who approved the training roadmap share that exposure. Their authority over that design is precisely the domain where <em>McDonald&rsquo;s</em> attaches. A loyalty-based oversight theory reaches them directly, alongside the board. For those officers, a red flag may be as simple as an internal report that self-modification has begun erasing logs or that safety metrics have drifted outside documented tolerances, without any corresponding escalation to the risk or audit committee.</p>
<p><strong>A SINGLE FRAMING DISCIPLINE</strong></p>
<p><em>In re SolarWinds Corp. Derivative Litigation</em>, No. 2021-0307-PVG (Del. Ch. Sept. 6, 2022), arose from the 2020 cyberattack in which threat actors compromised SolarWinds&rsquo; software update mechanism and used it to infiltrate the networks of thousands of customers, including multiple federal agencies. Shareholders brought <em>Caremark</em> claims alleging the board had failed to oversee the company&rsquo;s cybersecurity risks. But Delaware has not imposed <em>Caremark</em> liability for failure to monitor pure business risk absent bad-faith disregard of red flags or violations of positive law. The Delaware Court of Chancery dismissed the oversight claims, and the Delaware Supreme Court affirmed, on the ground that the complaint failed to plead particularized facts showing bad faith. The case establishes that the bad-faith threshold must be pled with particularity, and that framing the risk as a compliance or safety obligation rather than a business judgment call is the more durable path.</p>
<p>I read that precedent as requiring one discipline in framing this argument: general counsel must frame RSI safety and controllability for the board as a compliance and safety obligation, not as a category of business risk. The more the record shows directors treating RSI as an operational efficiency project, the closer the fact pattern comes to&nbsp;<em>SolarWinds</em>&nbsp;and the harder it will be to plead bad faith. The general counsel&rsquo;s framing is strongest where the record shows that directors were advised that specific design decisions would progressively render the system unmonitorable and chose to proceed without requiring adequate controls. That is the fact pattern where bad faith is pleadable with particularity.</p>
<p>California&rsquo;s SB 53 sharpens the framing discipline for covered developers. The statute applies to frontier models trained above a 10&sup2;&#8310; FLOP-scale compute threshold and defines &ldquo;critical safety incidents&rdquo; to include a model that uses deceptive techniques to subvert developer controls in a way that materially increases catastrophic risk. Covered developers must publish a Frontier AI Framework documenting how they assess and mitigate catastrophic risks, including the risk that models circumvent internal oversight mechanisms, and must periodically report summaries of catastrophic-risk assessments from internal use to California&rsquo;s Office of Emergency Services (OES). RSI experiments constitute internal use before any public deployment and therefore fall within that reporting scope. For a board at a covered company, effective RSI governance is now part of a statutory compliance obligation.</p>
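<p>For scale, a back-of-the-envelope coverage check against that threshold might look like the sketch below. The 6 &times; parameters &times; tokens formula is a common heuristic for estimating dense-transformer training compute, not statutory text, and the run size is hypothetical.</p>
<pre><code># Rough SB 53 coverage check against the 10^26 FLOP-scale threshold
# described above. The 6*N*D estimate is a heuristic, not the statute.
THRESHOLD_FLOPS = 1e26

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens   # common dense-transformer estimate

run = estimated_training_flops(n_params=2e12, n_tokens=1.5e13)  # hypothetical
print(f"{run:.1e} FLOPs -> covered: {run > THRESHOLD_FLOPS}")
# 1.8e+26 FLOPs -> covered: True
</code></pre>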
<p>That obligation lands where the legal exposure already runs. The companies operating closest to the compute and algorithmic thresholds at which RSI becomes a realistic deployment priority are almost all incorporated in Delaware, placing them under Delaware&rsquo;s fiduciary duty regime. OpenAI, Anthropic, Google DeepMind, and Meta maintain their primary research operations and headquarters in California, placing them within SB 53&rsquo;s territorial reach. SB 53 and <em>Caremark</em> do not govern different companies; for the most capable frontier developers, they govern the same board.</p>
<p>For covered developers, a failure to comply with SB 53&rsquo;s reporting obligations may generate regulatory penalties from California&rsquo;s OES. That California exposure is separate from Delaware derivative liability. A failure to report to OES does not automatically satisfy&nbsp;<em>Caremark</em>&rsquo;s bad-faith standard, and plaintiffs invoking SB 53 in derivative litigation should treat it as one factor in a particularized factual record, not as independent grounds for oversight liability.</p>
<p>For companies below SB 53&rsquo;s compute threshold, the statute does not apply. There is no reporting obligation and no OES exposure. A plaintiff bringing a&nbsp;<em>Caremark</em>&nbsp;claim against one of those companies cannot point to SB 53 as evidence of a compliance failure. The bad-faith argument must be built entirely from what the board knew about RSI risks and what it chose to do about them.</p>
<p>The general counsel&rsquo;s job is therefore to advise the board that SB 53 exists, that RSI is within its scope, and that the board must receive documentation adequate to confirm management&rsquo;s compliance. A board that was never told by counsel that SB 53 created these obligations faces a different exposure than one that was told and ignored it. Both have a governance problem. Only the second has a bad faith problem.</p>
<p><strong>THE IMPLICATION</strong></p>
<p>Senior management must know that establishing the policies, procedures, processes, and practices governing traceability, logging, lineage, change control, and human approval for any RSI deployment is their obligation. The board verifies that senior management has discharged it. Documentation, model inventories, and incident response must be real and must reach directors. When red flags emerge, including self-modification that erases logs or unexplained drift in safety metrics, the board should interrogate rather than accept black-box assurances. For covered developers under SB 53, the general counsel bears a specific responsibility in that chain to ensure the board understands that RSI governance is a compliance obligation, that the Frontier AI Framework required by the statute addresses RSI risks explicitly, and that the board is receiving the reporting it needs to confirm management&rsquo;s adherence. A general counsel who never briefed the board on SB 53&rsquo;s application to the company&rsquo;s RSI program has not discharged that responsibility. At a minimum, the board should instruct management to produce a single RSI governance pack summarizing architecture, logging and lineage controls, human approval gates, incident response plans, and SB 53 reporting posture, and to update it at a cadence the board sets.</p>
<p>The exposure does not end with derivative litigation. On December 4, 2025, the SEC&rsquo;s Investor Advisory Committee issued a formal recommendation that public companies disclose how they define AI, what board oversight mechanisms govern AI deployment, and the material effects of AI on their operations. The recommendation is advisory, not binding rulemaking, but public companies should expect pressure from investors and proxy advisers to respond in advance of any formal rule. A board that permitted management to deploy an RSI architecture without adequate oversight infrastructure cannot answer those questions without revealing the gap. The <em>Caremark</em> claim and the disclosure obligation now run in parallel, and the same deficiency feeds both.</p>
<p>Three frameworks now bear on the governance gap that RSI creates. Delaware&rsquo;s oversight doctrine under <em>Caremark</em> and its progeny is established law. The AI judgment rule is a theoretical trajectory courts have not yet adopted. SB 53 has added a statutory compliance obligation that makes the governance gap visible to a general counsel before any court is asked to find it. No case has yet been brought, but the legal framework is in place.</p>]]></content>
	<updated>2026-03-17T17:50:30+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-03-17T17:50:30+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="ai governance"/>

	<category term="ai liability"/>

	<category term="board of directors"/>

	<category term="eran kahana"/>

	<category term="frontier ai"/>

	<category term="rsi"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-16:/282832</id>
	<link href="https://www.gautrais.com/conferences/regime-de-responsabilite-de-lia-a-travers-le-spectre-de-lassurabilite/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=regime-de-responsabilite-de-lia-a-travers-le-spectre-de-lassurabilite" rel="alternate" type="text/html"/>
	<title type="html">Régime de responsabilité de l’IA à travers le spectre de l’assurabilité, A-3421 (Faculté de droit UdeM) + Zoom(16 mars 2026)</title>
	<summary type="html"><![CDATA[<p>Cette conf&eacute;rence organis&eacute;e par l&rsquo;OBVIA propose des r&eacute;flexions pr&eacute;liminaires, mais prospectives...</p>]]></summary>
	<content type="html"><![CDATA[<p>This conference, organized by OBVIA, offers preliminary yet forward-looking reflections on the emerging question of AI-related liability. It is prompted by the European Union&rsquo;s decision to abandon the proposed directive on liability for artificial intelligence, as well as by the growing number of lawsuits in the United States over harms allegedly caused by conversational AI agents.</p>
<p>Starting from the concrete risks already associated with AI systems, the conference offers a brief overview of emerging insurance practices and market responses aimed at covering AI-related damages. Its central thesis is that future AI liability frameworks should be designed not only around familiar objectives such as deterrence and victim compensation, but also with explicit attention to insurability.</p>
<p>By approaching liability through this lens, the conference explores what an effective and coherent AI liability regime might look like.</p>
<h4><strong>CONTINUING EDUCATION CERTIFICATE&nbsp;</strong></h4>
<div>
<p>A certificate of participation, attesting to 1.5 hours of training, will be issued to those registered for the activity on <a href="https://fcdroit.umontreal.ca/Web/MyCatalog/ViewP?pid=OPWhgFdTt9fynJhm%2fIXQ4A%3d%3d&amp;id=4cZfvq367bmGHsCjVorJTQ%3d%3d&amp;cvState=cvDate=03-03-2026" rel="noopener noreferrer" target="_blank">FCDroit.umontreal.ca</a>, subject to completion of the required administrative formalities.</p>
<p>The certificate will be deposited on FCDroit.umontreal.ca, in the file of each participant attending online or on site.</p>
</div>
<h4><strong>SPEAKERS</strong></h4>
<div><strong><a href="https://www.gautrais.com/files/sites/185/2026/03/GSELL.png" rel="noopener noreferrer" target="_blank"><img decoding="async" src="https://www.gautrais.com/files/sites/185/2026/03/GSELL-475x475.png" alt="Florence G'sell" loading="lazy"></a>Florence G&rsquo;sell </strong>is a visiting professor at Stanford University, where she directs the Program on the Governance of Emerging Technologies at the Tech Impact and Policy Center (Freeman Spogli Institute). She is a professor of private law at the Université de Lorraine (currently on leave), a member of the AI and Society Institute (ENS-PSL), and an associate researcher at the Centre for Digital Law (Singapore Management University). From 2019 to 2025, she held the Chair in Digital Governance and Sovereignty at Sciences Po (Paris).<br>Her recent publications include Regulating under Uncertainty: Governance Options for Generative AI (Stanford Cyber Policy Center, 2024); Statutory Obsolescence in the Age of Innovation: A Few Thoughts about GDPR (Network Law Review, September 2025); and Balancing Code and Law: Governance and Policy Challenges of Blockchain (forthcoming).</div>
<div></div>
<div><strong><a href="https://www.gautrais.com/files/sites/185/2026/03/vermeys.jpeg" rel="noopener noreferrer" target="_blank"><img decoding="async" src="https://www.gautrais.com/files/sites/185/2026/03/vermeys.jpeg" alt="Nicolas Vermeys" loading="lazy"></a>Nicolas Vermeys,&nbsp;</strong>LL.D. (Université de Montréal), LL.M. (Université de Montréal), CISSP, is the director of the Centre de recherche en droit public (CRDP), associate director of the Cyberjustice Laboratory, and a professor at the Faculty of Law of the Université de Montréal. He is also a visiting professor at the law schools of William &amp; Mary (U.S.) and the University of Fortaleza (Brazil).<br>
Me Vermeys is a member of the Barreau du Québec and holds the CISSP information security certification issued by (ISC)². He is the author of numerous publications on the impact of information technology on the law, including the books Droit codifié et nouvelles technologies : le Code civil (Yvon Blais, 2015) and Responsabilité civile et sécurité informationnelle (Éditions Yvon Blais, 2010). He takes a particular interest in legal questions raised by artificial intelligence, information security, and developments in cyberjustice and, more generally, in the impact of technological innovation on the law, topics on which he is regularly invited to speak to the media and at conferences for judges, lawyers, professional associations, and government bodies in Canada and abroad.</div>
<p><strong>SCHEDULE</strong></p>
<ul>
<li>Monday, March 16, 2026</li>
<li>Starts at 5 p.m.</li>
<li>On site, Faculty of Law,&nbsp;A-3464 &ndash; Salon François-Chevrette</li>
<li>Online via Zoom</li>
</ul>
<p><strong>REGISTRATION FEES</strong></p>
<ul>
<li>Free admission</li>
<li>Registration required to obtain a certificate of attendance</li>
</ul>]]></content>
	<updated>2026-03-16T14:56:08+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-03-16T14:56:08+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-16:/282761</id>
	<link href="https://law.stanford.edu/2026/03/16/zimbabwe-computational-antitrust/" rel="alternate" type="text/html"/>
	<title type="html">Zimbabwe’s Competition and Tariff Commission Joins Stanford Computational Antitrust Project</title>
	<summary type="html"><![CDATA[<p>The Competition and Tariff Commission of Zimbabwe has joined the Stanford Computational Antitrust pr...</p>]]></summary>
	<content type="html"><![CDATA[<p>The <a href="https://www.competition.co.zw/" rel="noopener noreferrer" target="_blank">Competition and Tariff Commission of Zimbabwe</a> has joined the <a href="https://law.stanford.edu/computationalantitrust" rel="noopener noreferrer" target="_blank">Stanford Computational Antitrust project</a>. Headquartered in Harare, the Commission enforces competition rules, reviews mergers, and promotes competitive market structures in Zimbabwe&rsquo;s economy. Its participation expands the project&rsquo;s engagement with African competition authorities and will contribute to the network&rsquo;s collective knowledge of competition enforcement in fast-emerging economies.</p>
<p><em>&ldquo;We are delighted to welcome the Competition and Tariff Commission of Zimbabwe to the project. Their membership strengthens our network&rsquo;s reach across Africa and reflects a shared conviction that computational tools have a critical role to play in the future of competition enforcement. We look forward to collaborating closely with the Commission and to benefiting from its experience and perspective.&rdquo;</em></p>
<p><strong>Dr. Thibault Schrepel, Project Director, Stanford Computational Antitrust</strong></p>
<p>As a member of the network, the Commission will participate in the project&rsquo;s annual workshop, contribute to its annual report, and engage with the scholarly and practitioner community through the <em>Stanford Computational Antitrust</em> journal, the only peer-reviewed publication dedicated to the intersection of antitrust law and computational methods. The Commission will also have access to the tools and datasets shared across the network&rsquo;s global membership.</p>
<p><strong>About the Competition and Tariff Commission of Zimbabwe</strong></p>
<p>The Competition and Tariff Commission of Zimbabwe is the national competition authority responsible for promoting and maintaining competition across Zimbabwe&rsquo;s market. Its mandate covers merger control, the regulation of anti-competitive agreements and abuse of dominance, consumer protection from unfair business conduct, and tariff investigations. The Commission is headquartered in Harare. <a href="https://www.competition.co.zw/" rel="noopener noreferrer" target="_blank">www.competition.co.zw</a></p>
<p><strong>About the Stanford Computational Antitrust Project</strong></p>
<p>The Stanford Computational Antitrust project is hosted by CodeX &ndash; the Stanford Center for Legal Informatics at Stanford University and led by Professor Thibault Schrepel, Associate Professor at VU Amsterdam and Faculty Affiliate at Stanford. Launched in January 2021, the project brings together over 75 antitrust agencies and leading academics from law, economics, and computer science to explore how computational tools can advance competition enforcement. The project publishes the <em>Stanford Computational Antitrust</em> journal and organises an annual conference and workshop. The project receives no private funding.</p>]]></content>
	<updated>2026-03-16T08:30:34+00:00</updated>
	<author><name>Thibault Schrepel</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-03-16T08:30:34+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="computational antitrust"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-12:/282371</id>
	<link href="https://law.stanford.edu/2026/03/12/computational-antitrust-comesa/" rel="alternate" type="text/html"/>
	<title type="html">The COMESA Joins the Stanford Computational Antitrust project</title>
	<summary type="html"><![CDATA[<p>The Stanford Computational Antitrust project announces that the COMESA Competition and Consumer Comm...</p>]]></summary>
	<content type="html"><![CDATA[<p>The <a href="https://law.stanford.edu/computationalantitrust" rel="noopener noreferrer" target="_blank">Stanford Computational Antitrust project</a> announces that the COMESA Competition and Consumer Commission (CCCC) has joined the project as a partner agency. The cooperation establishes a working relationship between the regional competition authority of the COMESA and the research program hosted at Stanford CodeX.</p>
<p>The COMESA Competition and Consumer Commission operates across a regional market composed of countries in Eastern, Northern, Central, and Southern Africa, including Djibouti, Eritrea, Ethiopia, Somalia, Egypt, Libya, Sudan, Tunisia, Comoros, Madagascar, Mauritius, Seychelles, Burundi, Kenya, Malawi, Rwanda, Uganda, Eswatini, Zambia, Zimbabwe, and the Democratic Republic of the Congo. The jurisdiction covers a large economic space in which competition policy plays an increasing role in market integration and economic development.</p>
<p>The collaboration between the COMESA Competition and Consumer Commission and the Stanford Computational Antitrust project will focus on the study and practical deployment of computational tools in competition enforcement. The two institutions will work together in the coming weeks and months to examine how data analysis and artificial intelligence can assist the agency in detecting anticompetitive conduct or monitoring markets.</p>
<p>Thibault Schrepel, creator and director of the Stanford Computational Antitrust project, said: &ldquo;Competition enforcement has entered a new phase in which computation is becoming essential. Partnering with the COMESA Competition and Consumer Commission creates a unique opportunity to experiment with related approaches in a large and diverse economic region. We look forward to working closely together to generate new insights and help shape the future of competition enforcement.&rdquo;</p>
<p>The Stanford Computational Antitrust project looks forward to a sustained collaboration with the COMESA Competition and Consumer Commission and to supporting new initiatives aimed at strengthening competition enforcement across the COMESA region.</p>]]></content>
	<updated>2026-03-12T08:00:24+00:00</updated>
	<author><name>Thibault Schrepel</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-03-12T08:00:24+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="computational antitrust"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-11:/282287</id>
	<link href="https://law.stanford.edu/2026/03/05/horace-king-lextar-ai-codex-group-meeting-march-5-2026/" rel="alternate" type="text/html"/>
	<title type="html">Horace King – Lextar AI – CodeX Group Meeting – March 5, 2026</title>
	<summary type="html"><![CDATA[<p>Lextar AI: Governance-Grade AI for Legal Reasoning in Regulated Environments
Horace King, Co-Founder...</p>]]></summary>
	<content type="html"><![CDATA[<p><b>Lextar AI: Governance-Grade AI for Legal Reasoning in Regulated Environments</b></p>
<p><span>Horace King, Co-Founder &amp; CEO of <a href="https://lextarai.ca/" target="_blank" rel="noopener noreferrer">Lextar AI</a>, joins the CodeX Group to present his vision for responsible AI in legal practice. Drawing on Canada&rsquo;s directive on automated decision making and U.S. executive frameworks, Horace walks through how Lextar AI is built from the ground up to meet government-grade standards for transparency, explainability, and human accountability.</span></p>
<p><span>Unlike general-purpose AI tools, Lextar AI is a structured legal reasoning platform &mdash; not a chatbot. It breaks legal analysis into 25&ndash;40 explicit, auditable steps, is jurisdiction-aware (currently supporting Canada and the U.S.), and understands legal hierarchy from constitutional law down to policy directives. The goal: defensible work product that lawyers and judges can stand behind.</span></p>
<p><span>In this session, Horace demos the platform, explains how it differs from outcome-simulation tools and generic AI, and takes questions on its RAG architecture, underlying model, training data, and what &ldquo;governance grade&rdquo; actually means in practice.</span></p>
<figure><img decoding="async" src="https://law.stanford.edu/wp-content/uploads/2026/03/horace-king-lextar-ai-codex-group-meeting-march-5-2026.jpg" alt="Horace King - Lextar AI - CodeX Group Meeting - March 5, 2026" loading="lazy"><figcaption>Lextar AI</figcaption></figure>
<p><a href="https://youtu.be/_sPXvL2w0CY" rel="noopener noreferrer" target="_blank">Watch the CodeX Group Meeting on YouTube</a></p>
<p><strong>Transcript</strong></p>
<p><span>Roland Vogl: Welcome, everyone. Let&rsquo;s get started. Welcome to our CodeX group meeting. We&rsquo;ll hear from Horace King, who is the co-founder and CEO of Lextar AI, which is a governance-grade legal reasoning platform for regulated environments.</span></p>
<p><span>Horace King: I feel deeply honored to be invited to make a presentation today, knowing that all of you are experts in AI and the relationship between AI and law. Today I&rsquo;d like to talk about responsible AI. My presentation is not just a promotional pitch. I would like to talk about my perspective as a businessman running this business and about responsible AI and structured reasoning in regulated decision making.</span></p>
<p><span>I know of Roland Vogl and CodeX, and I look to you as forerunners in responsible AI research. I am Canadian and live in Canada. The first unified legal document on responsible AI in Canada is the Directive on Automated Decision-Making. It includes several rules on algorithmic impact assessment, transparency, explainability, human-in-the-loop review, quality and bias testing, and auditability.</span></p>
<p><span>In the U.S., there is not just one unified document on responsible AI &mdash; there are three: the 2020 Executive Order, the 2025 Executive Order, and Office of Management and Budget memoranda M-25-21 and M-25-22.</span></p>
<p><span>I started this business &mdash; it was actually incorporated on the first day of this year, but I started designing this product as early as 2024. I always kept in mind that I needed to explore how AI could be used to facilitate or assist lawyers, judges, and others in the legal field. This is not my first startup. In 2020, I built a database of Chinese law in English. That database is still working on the market. I&rsquo;ll come back to why I designed this product in more detail, but to summarize, the responsible AI requirements in government systems should at least include algorithmic impact assessment, transparency, explainability, human-in-the-loop decision making, bias testing, and auditability. I always kept those requirements in mind when designing this product.</span></p>
<p><span>Almost everybody knows that it can be risky for regulated industries &mdash; for lawyers, for judges &mdash; because some lawyers have been sanctioned for using hallucinated authorities, which is essentially AI-generated legal psychosis. Another issue is the black box problem: a lack of traceable reasoning and jurisdictional confusion. AI may sometimes confuse jurisdictions or use outdated sources.</span></p>
<p><span>More and more lawyers are using AI to facilitate their work, but they also face great risk potential. Because automated AI output is used without sufficient verification, there is a lack of accountability &mdash; a serious issue for lawyers and judges. Almost all institutions are now making policies or guidelines regarding AI adoption, and in the absence of governance controls, that gap remains.</span></p>
<p><span>When I designed this AI, I kept several things in mind. The first is that it is designed not to compete with existing legal practice tools, but to complement them. Over the past two decades, technology has largely been used to facilitate legal research &mdash; the retrieval of laws, regulations, policies, and cases. LexisNexis and Westlaw have made great contributions to that, and it is genuinely very helpful.</span></p>
<p><span>When I designed this product, I asked whether I could find something new that could genuinely help people. It seems to me that in legal practice, very few tools have addressed legal drafting and legal reasoning in a way that is transparent and auditable. That is the first thing I kept in mind: to complement existing tools by focusing on the reasoning and drafting stage.</span></p>
<p><span>Second, the system is designed to assist human judgment, not displace it. It is not meant to substitute for or take over the work of a lawyer. Accountability remains human. The system is there only to assist lawyers and judges, to support legal analysis through structured reasoning, to preserve human decision-making authority and accountability, and to make legal work defensible &mdash; by keeping the reasoning trace transparent. You can see how the AI reasoned and how it verified against rules and laws. I will show you an example shortly.</span></p>
<p><span>The system prioritizes structured reasoning over speed. It may take several minutes &mdash; five, seven, or ten &mdash; depending on the complexity of the case.</span></p>
<h3><span>DEMO WALKTHROUGH</span></h3>
<p><span>This is the website, which is already launched. On the first intake page, you can input up to 5,000 words in the text box, or you can browse and upload PDF or Word files for AI to process.</span></p>
<p><span>The AI has role awareness. You can choose the role of the AI as a neutral legal analyst, as counsel representing the applicant, or as counsel representing the respondent. There is also a role for adjudicators &mdash; such as arbitrators, judges, and tribunal members &mdash; but that role is not available on the website to avoid confusion. We are keeping it for visiting professionals.</span></p>
<p><span>The analysis runs through approximately 25 to 40 steps depending on the complexity of the case. When complete, it will show &lsquo;Processing complete &mdash; all analysis complete,&rsquo; and the button will change to &lsquo;Expand All,&rsquo; allowing you to review every step and verification. Different colors are used to indicate AI confidence levels. A 50% confidence rating, for instance, signals that the claim should be verified. The system also makes recommended actions: what to address immediately, what is urgent, what can be deferred, and a conclusion with citations.</span></p>
<h3><span>STRUCTURED LEGAL REASONING</span></h3>
<p><span>In my view, structured reasoning means the system does not jump directly to an answer. Instead, it breaks legal analysis into explicit steps. In a litigation case, it first sorts and structures case materials &mdash; organizing evidence and other materials the litigation lawyer has received from the client. It then organizes legal claims and issues, identifies missing legal elements or claims, tests each required legal element, evaluates supporting evidence, and detects gaps, inconsistencies, contradictions, and unsupported assertions. It then drafts the output and recommends verification steps and corrective actions.</span></p>
<p><span>This is what distinguishes generic AI from governance-grade legal AI. We implement what we call the Lex AI reasoning pipeline: the system is designed first to plan, then to retrieve, verify, and ground, and finally to synthesize &mdash; broken into roughly twenty-seven steps.</span></p>
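<p>To make that staging concrete, a minimal Python sketch of such a plan/retrieve/verify/ground/synthesize pipeline, with an auditable trace of the kind described, might look like the following. Every name, the toy database, and the stub logic are illustrative assumptions, not Lextar AI&rsquo;s implementation.</p>
<pre><code># A minimal sketch of the plan/retrieve/verify/ground/synthesize staging
# with an auditable trace. All names, the toy database, and the stub
# logic are illustrative assumptions, not Lextar AI's implementation.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    output: str
    confidence: float  # e.g. 0.5 signals "verify this claim by hand"

@dataclass
class Trace:
    steps: list = field(default_factory=list)

    def record(self, name, output, confidence=1.0):
        self.steps.append(Step(name, output, confidence))

CURATED_DB = {"Statute s. 7": "No penalty may be imposed without notice."}

def plan(case_text, trace):
    # Stub issue spotting; a real system organizes claims and evidence.
    issues = ["Was notice given before the penalty?"]
    trace.record("plan", f"{len(issues)} issue(s) identified")
    return issues

def retrieve(issues, trace):
    hits = dict(CURATED_DB)  # stub: query the curated legal database
    trace.record("retrieve", f"{len(hits)} authorities retrieved")
    return hits

def verify(hits, trace):
    # Keep only authorities present in the curated database, so the
    # system alerts rather than citing a hallucinated source.
    verified = {k: v for k, v in hits.items() if k in CURATED_DB}
    trace.record("verify", f"{len(verified)} authorities verified")
    return verified

def ground_and_synthesize(issues, verified, trace):
    draft = "; ".join(f"{issue} [cites: {', '.join(verified)}]"
                      for issue in issues)
    trace.record("synthesize", draft, confidence=0.5)
    return draft

trace = Trace()
issues = plan("penalty imposed without notice", trace)
verified = verify(retrieve(issues, trace), trace)
print(ground_and_synthesize(issues, verified, trace))
for step in trace.steps:  # the reviewable reasoning trace
    print(step.name, step.confidence)
</code></pre>
<p>The recorded steps, each carrying a confidence value, are what would let a lawyer or judge replay and defend the reasoning afterwards.</p>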
<p><span>The promise of Lex AI is simple: to produce defensible work product, reduce the risk of hallucination, maintain high consistency, and preserve human authority.</span></p>
<p><span>There are currently two different approaches to AI in dispute resolution and regulated decision making. One is outcome simulation. The other is structural legal reasoning &mdash; the approach we adopt. Both approaches have strengths and limitations, and I am not suggesting one is strictly better than the other.</span></p>
<h3><span>Q&amp;A</span></h3>
<p><span>Q: How does it compare to reasoning models in ChatGPT, for example? It breaks out steps and explains what it&rsquo;s doing &mdash; is this the same but just in a legal context, or have you done something specific in terms of training?</span></p>
<p><span>Horace King: When people talk about AI, they often refer to general AI &mdash; chatbots. But Lex AI is not a chatbot. General AI is not jurisdiction-aware and it can reason too broadly, which is why it may hallucinate. Lex AI is designed to reason within a boundary. You first choose a jurisdiction &mdash; right now we support Canada and the U.S. If you input a case where Canadian law would apply but you&rsquo;ve selected U.S. jurisdiction, the system will decline to process it and explain that it doesn&rsquo;t apply.</span></p>
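<p>The jurisdiction gate described here amounts to a detect-and-compare check before any analysis runs. The sketch below illustrates that behavior; the marker lists and function names are assumptions for illustration, not Lextar&rsquo;s detection logic.</p>
<pre><code># Hypothetical sketch of a jurisdiction gate: detect the governing
# jurisdiction from the input and decline, with an explanation, when it
# differs from the one the user selected. Toy markers, illustrative only.
JURISDICTION_MARKERS = {
    "canada": ["charter of rights", "federal court of canada", "r.s.c."],
    "us": ["u.s.c.", "federal register", "code of federal regulations"],
}

def detect_jurisdiction(case_text):
    text = case_text.lower()
    for jurisdiction, markers in JURISDICTION_MARKERS.items():
        if any(marker in text for marker in markers):
            return jurisdiction
    return None

def process(case_text, selected):
    detected = detect_jurisdiction(case_text)
    if detected and detected != selected:
        return (f"Declined: this matter appears governed by {detected} law, "
                f"but {selected} was selected.")
    return "Proceeding with analysis..."  # hand off to the pipeline

print(process("Filed under R.S.C. 1985 before the Federal Court of Canada.",
              "us"))
</code></pre>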
<p><span>Q: Is this a multi-agent reasoning system using RAG and in-context learning?</span></p>
<p><span>Horace King: Yes, we use RAG pipelines. Lex AI differs in that we train it to be aware not just of jurisdictions but of the legal hierarchy &mdash; which law overrides which. It searches first at the constitutional level, then legislation, then regulations, policies, manuals, and directives. It understands the different levels of legal force and effect, and it also understands the precedential weight of cases. We use our own curated legal database. We are not trying to build a comprehensive database like LexisNexis or Westlaw &mdash; we build a targeted database of the required statutes and cases needed to enable the AI to reason well. If any law or case is not found in our database, it alerts the lawyer or judge. There is also a live interaction panel on the right side of the website where users can ask questions about the output, refine the analysis, or make amendments to the case.</span></p>
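<p>The hierarchy-aware retrieval he describes can be pictured as a tiered lookup that walks sources in order of legal force and alerts instead of guessing. The sketch below is a hypothetical illustration of that idea, with a toy database; it is not the product&rsquo;s code.</p>
<pre><code># Hypothetical sketch of hierarchy-ordered retrieval: search constitutional
# sources first, then statutes, regulations, and directives, and alert the
# user when nothing in the curated database matches. Illustrative only.
LEGAL_HIERARCHY = ["constitution", "legislation", "regulation",
                   "policy", "manual", "directive"]

# Toy curated database keyed by hierarchy level.
DB = {
    "legislation": {"Competition Act s. 45": "Conspiracy offence ..."},
    "regulation": {"Reg. 2019-123 s. 4": "Filing deadlines ..."},
}

def retrieve_by_hierarchy(query, db=DB):
    """Return matches tagged with their level, in order of legal force."""
    results = []
    for level in LEGAL_HIERARCHY:
        for citation, text in db.get(level, {}).items():
            if query.lower() in text.lower():
                results.append((level, citation, text))
    if not results:
        # Alert rather than hallucinate an authority.
        return ["ALERT: no authority found in the curated database"]
    return results

print(retrieve_by_hierarchy("deadlines"))
</code></pre>
<p>Ordering the walk by level of legal force is what encodes &ldquo;which law overrides which&rdquo; into the retrieval step itself.</p>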
<p><span>Q: What is the underlying model and what training data was used? What does &lsquo;governance grade&rsquo; actually mean &mdash; do you have a compliance certification?</span></p>
<p><span>Horace King: &lsquo;Governance grade&rsquo; means the system satisfies the responsible AI requirements set by government &mdash; those I described earlier, including explainability, transparency, accountability, and human-in-the-loop. A judge or lawyer can defend their use of the tool by showing the reasoning trace: how the AI reasoned and how it verified against laws and cases. As for training data, it comes from the public domain but is curated with our own taxonomies, downloaded from official government websites. We are currently using an enterprise-grade model &mdash; Microsoft Copilot &mdash; though we have considered using both Copilot and other models as the product evolves.</span></p>
]]></content>
	<updated>2026-03-05T20:58:02+00:00</updated>
	<author><name>CodeX</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-03-05T20:58:02+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="codex"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-11:/282259</id>
	<link href="https://law.stanford.edu/2026/03/05/maria-tzevelekou-and-mikhail-tzevelekos-draco-codex-group-meeting-march-5-2026/" rel="alternate" type="text/html"/>
	<title type="html">Maria Tzevelekou and Mikhail Tzevelekos – Draco – CodeX Group Meeting March 5, 2026</title>
	<summary type="html"><![CDATA[<p>Maria and Mikhail Tzevelekou, co-founders of Draco, join CodeX to discuss how they&rsquo;re using AI...</p>]]></summary>
	<content type="html"><![CDATA[<p><span>Maria and Mikhail Tzevelekou, co-founders of Draco, join CodeX to discuss how they&rsquo;re using AI to tackle one of Europe&rsquo;s most digitally lagging legal systems. Greece ranks last in the EU for legal sector digitalization &mdash; with money laundering cases taking an average of 20 years to resolve and over 100,000 legal documents still undigitized.</span></p>
<p><span>In this session, the sibling duo (a lawyer and an engineer) walk through Draco&rsquo;s architecture: a specialized ontology-augmented retrieval system built for the unique challenges of the Greek language and legal domain. They demo four core modules &mdash; Legal Advisory, Document Drafting, Jurisprudence Research, and Case Intelligence &mdash; and discuss their path from a judiciary-focused tool to a full practitioner platform.</span></p>
<p><span>They also cover data sourcing, liability considerations, differentiation from existing Greek legal tech, and their roadmap for expanding into other civil law jurisdictions across Europe and Latin America.</span></p>
<figure><img decoding="async" src="https://law.stanford.edu/wp-content/uploads/2026/03/maria-tzevelekou-and-mikhail-tzevelekos-draco-codex-group-meeting-march-5-2026.jpg" alt="Maria Tzevelekou and Mikhail Tzevelekos - Draco - CodeX Group Meeting March 5, 2026" loading="lazy"><figcaption>Draco</figcaption></figure>
<p><a href="https://youtu.be/HWW3Lga3htI" rel="noopener noreferrer" target="_blank">Watch the CodeX Group Meeting on YouTube</a></p>
<p><strong>Transcript</strong></p>
<p><b>Roland Vogl:</b><span> Welcome, everyone. Let&rsquo;s get started. Welcome to a CodeX group meeting. Today we have two presentations. First, we have Maria Tzevelekou and Mikhail Tzevelekos, who are the co-founders of Draco. They are siblings, right? Brother and sister?</span></p>
<p><b>Mikhail:</b><span> Yes, that&rsquo;s correct.</span></p>
<p><b>Roland Vogl:</b><span> One of you &mdash; Mikhail &mdash; is a lawyer, and Maria is a technologist.</span></p>
<p><b>Mikhail:</b><span> That&rsquo;s right.</span></p>
<p><b>Roland Vogl:</b><span> Legal innovation runs in the family. We&rsquo;ll hear from you about the AI system you&rsquo;ve built for the Greek judicial system. Over to you, Maria and Mikhail.</span></p>
<p><b>Maria:</b><span> I&rsquo;m Maria, and this is Mikhail.</span></p>
<p><b>Mikhail:</b><span> I practice law, and Maria is an engineer. She&rsquo;s currently a student at the Electrical and Computer Engineering School at the National Technical University of Athens. Together we&rsquo;re building a program called Draco, which aims to accelerate Greek legal justice using AI and modern tools.</span></p>
<p><span>The premise for building this came from firsthand experience. I came up with the idea initially and brought it to Maria while I was working as a law clerk during law school. At the time, I was doing a lot of clerical and repetitive work &mdash; very little actual law. Most of my days were spent sifting through disorganized data and trying to make sense of it so we could build arguments. That was also around the time large language models had just begun to take off, and I started experimenting with them to see how I could make the clerical aspects of my workload faster, so I could actually practice law.</span></p>
<p><b>Maria:</b><span> I also examined the problem more broadly across Greece. What we&rsquo;re going to show you is what this problem looks like from a firsthand perspective.</span></p>
<p><span>The core problem is that millions of legal documents are computationally invisible. The pictures on your screen are from the Athens District Court &mdash; case files, judicial decisions, everything stacked in towering piles of paper. Greece ranks lowest in Europe for digitalization in the legal sector, according to EU statistics from 2024. Money laundering cases, for example, were completed after an average of 20 years, and the average case takes 5.5 years, compared to roughly 2.5 to 3 years in other European countries.</span></p>
<p><span>To put it in perspective: a judge will take one of those large case files home, physically read through all the filings and evidence, type out the relevant data, and then attempt to render a judgment.</span></p>
<p><b>Mikhail:</b><span> The Greek legal problem breaks down into three aspects. First, there are seven or more disconnected source systems per query &mdash; legal materials spanning both online and physical archives, from Supreme Court archives and the Council of State to administrative courts, private databases, and physical-only archives, with no unified query interface. Second, 90% of working time is spent on information retrieval &mdash; based on interviews we conducted with Supreme Court judges, administrative court judges, and lawyers at major law firms &mdash; largely because most documents exist as scanned PDFs, unstructured text, or handwritten text. Third, more than 100,000 primary legal documents remain undigitized, and Greek legal language carries additional complexity, retaining polytonic influences that make modern OCR systems far more error-prone than their English-language counterparts.</span></p>
<p><b>Maria:</b><span> To tackle this problem, we built a specific architecture for the Greek domain. We ingest raw documents &mdash; PDFs, scans, or typed text &mdash; then apply OCR and extraction using language models fine-tuned for the Greek language. That fine-tuning capability became available roughly three months ago. We then segment that knowledge into structured forms, creating a knowledge graph that our agent system can work with. Finally, we have an evidence-first retrieval system designed to reduce hallucinations and generate accurate responses. It&rsquo;s closer to an ontology-ranked system than a traditional RAG or graph system.</span></p>
<p><span>Our key insight is that standard RAG cannot handle these large documents at the accuracy we need, especially for Greek. In English, one token roughly corresponds to one word. In Greek, one word can occupy two or three tokens. Given a fixed context window, it becomes much harder for the system to understand what&rsquo;s happening &mdash; especially when the model hasn&rsquo;t been trained on large Greek datasets. We drew on the &ldquo;Lost in the Middle&rdquo; paper, which speaks to accuracy degradation across long contexts. Rather than relying purely on text similarity, we built an evidence-first ontology-relationship retrieval system, which more closely mirrors how an actual lawyer processes large documents &mdash; focusing on relationships between concepts rather than raw text.</span></p>
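<p>Her point about token inflation is easy to check with an off-the-shelf BPE tokenizer. The snippet below uses the open-source tiktoken package (an assumption of convenience; any BPE tokenizer shows the same effect) to count tokens per word for an English and a Greek sentence of similar meaning.</p>
<pre><code># Quick check of the Greek token-inflation point with a stock BPE
# tokenizer (OpenAI's open-source tiktoken package; pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = {
    "english": "The court dismissed the application for interim relief.",
    "greek": "Το δικαστήριο απέρριψε την αίτηση προσωρινής προστασίας.",
}

for lang, sentence in samples.items():
    words = len(sentence.split())
    tokens = len(enc.encode(sentence))
    print(f"{lang}: {words} words, {tokens} tokens "
          f"({tokens / words:.1f} tokens/word)")

# Greek typically lands at two or more tokens per word on English-centric
# vocabularies, so a fixed context window holds far less Greek text.
</code></pre>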
<p><b>Mikhail:</b><span> Here&rsquo;s our first document example. This is heavily redacted, drawn from a real case. What you&rsquo;re looking at is a handwritten doctor&rsquo;s opinion that is part of a case before the Court of Cassation. Documents like this &mdash; among others &mdash; are what users upload to the platform to begin their analysis. Even for someone who reads Greek fluently, this document is extremely difficult to parse.</span></p>
<p><b>Maria:</b><span> Here&rsquo;s a quick overview of the platform. Users log in to Draco, which is typically deployed on-premises for data protection reasons &mdash; in Europe, GDPR compliance is non-negotiable. There are several suites available.</span></p>
<p><span>The </span><b>Legal Advisory</b><span> suite allows users to work within a specific database &mdash; either one they&rsquo;ve built or one we&rsquo;ve built with them &mdash; and retrieve knowledge exclusively from that database to avoid hallucinations.</span></p>
<p><span>The </span><b>Legal Drafting</b><span> platform is closer to the general-purpose tools available on the market. Users can specify facts from a case and draft documents, drawing on the retrieval methods described above.</span></p>
<p><span>The </span><b>Jurisprudence Suite</b><span> connects to an external database of the user&rsquo;s choosing and searches it using the same methods.</span></p>
<p><span>The </span><b>Case Intelligence</b><span> module is more of a graph-based tool, which we&rsquo;ll show further.</span></p>
<p><b>Mikhail:</b><span> Users begin by creating a new case file, uploading all of their unstructured documents, naming the case, and classifying it however they choose. This example involves a request for a stay of an administrative penalty before the Court of Cassation. The user has uploaded a large main document &mdash; a scanned copy of the motion &mdash; along with supporting documents.</span></p>
<p><span>The first step is handled by a proprietary agent that extracts the ontology layer from the uploaded data. This serves two purposes: it enables the ontology-augmented retrieval for subsequent queries, and it gives users a visual summary of key elements they may have overlooked. This layer is dynamic &mdash; an agent rebuilds it for each case.</span></p>
<p><b>Maria:</b><span> The Legal Advisory suite functions much like a case-specific assistant, grounded entirely in the documents the user has uploaded. Every answer cites a specific passage from those documents. We&rsquo;ve also built in a refusal mechanism: if a question cannot be answered based on the uploaded facts, the agent will say so explicitly rather than fabricating a response.</span></p>
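<p>A refusal mechanism of this kind is typically a grounding gate applied before generation. The sketch below illustrates the pattern with hypothetical names and simple lexical overlap standing in for a real retriever; it is not Draco&rsquo;s code.</p>
<pre><code># Generic sketch of an answer-or-refuse gate: only answer when the
# uploaded case documents contain supporting text above a threshold.
# The names and the overlap scorer are illustrative, not Draco's code.
def tokens(text):
    return set(word.strip(".,;?!") for word in text.lower().split())

def overlap(question, passage):
    q = tokens(question)
    return len(q.intersection(tokens(passage))) / max(len(q), 1)

def answer_or_refuse(question, passages, threshold=0.3):
    best = max(passages, key=lambda p: overlap(question, p), default="")
    if overlap(question, best) >= threshold:
        return {"answer": best, "citation": best[:60]}
    # Refuse explicitly rather than fabricate a response.
    return {"answer": "This cannot be answered from the uploaded documents.",
            "citation": None}

docs = ["The penalty was imposed on 4 March without prior written notice."]
print(answer_or_refuse("Was written notice given before the penalty?", docs))
print(answer_or_refuse("What does the lease say about subletting?", docs))
</code></pre>
<p>The second call refuses because nothing in the uploaded facts supports an answer, which is the behavior Maria describes.</p>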
<p><span>We&rsquo;re translating some of the query examples into English here so you can follow along. In this example, the user asks a specific question and the system surfaces all relevant arguments present in the case, with citations. Certain portions are redacted, but you get the idea.</span></p>
<p><b>Maria:</b><span> In the Document Drafting suite, users can request drafts of any document type. What we&rsquo;ve done differently here addresses a common complaint from colleagues and judges: whenever they use AI platforms to draft something in Greek, the phrasing sounds awkward. We believe this is because these models are trained predominantly in English and attempt to reverse-engineer Greek, producing unsatisfying results.</span></p>
<p><span>Our solution is to allow users to upload documents they&rsquo;ve previously written themselves. The model can then adapt to that user&rsquo;s preferred style. We also intentionally limit this to the user&rsquo;s best drafts &mdash; most colleagues told us they want the system to emulate their finest writing, not their rough work.</span></p>
<p><b>Mikhail:</b><span> The styling function analyzes all documents the user provides and extracts a linguistic profile across vocabulary, sentence structure, and argumentation style. Users can also add comments specifying what to include or exclude, making it more accurate to their particular style. We&rsquo;ve found that law firms often have a house style for specific document types, and that can vary significantly from firm to firm &mdash; so capturing that granularity was important.</span></p>
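<p>A linguistic profile of that kind can begin with very simple statistics. The sketch below computes two of them over a user&rsquo;s drafts; the function and fields are hypothetical and far simpler than what the platform would compute.</p>
<pre><code># Toy sketch of a style profile: average sentence length and vocabulary
# richness over a user's own best drafts. Illustrative names only.
import re
from statistics import mean

def style_profile(drafts):
    sentences, words = [], []
    for text in drafts:
        for sentence in re.split(r"[.!?]+", text):
            sentence = sentence.strip()
            if sentence:
                sentences.append(sentence)
                words.extend(sentence.lower().split())
    return {
        "avg_sentence_words": mean(len(s.split()) for s in sentences),
        "vocabulary_richness": len(set(words)) / len(words),
    }

drafts = ["The motion is granted. Costs follow the event.",
          "For the reasons below, the appeal must fail."]
print(style_profile(drafts))
</code></pre>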
<p><b>Maria:</b><span> The Research Suite allows users to research specific legal topics within their chosen legal database. A secondary agent can then follow up with deeper questions and explore related topics. Every answer includes citations to key statutes so users can read further independently, download them, or continue their research with our assistant.</span></p>
<p><b>Mikhail:</b><span> Our core question is this: how can we use AI to improve the quality of justice in society? And as AI advances so rapidly, how can we implement it thoughtfully &mdash; and help others do the same &mdash; in fields like law that remain deeply paper-bound and wedded to traditional methods?</span></p>
<p><span>Thank you for having us.</span></p>
<p><b>Roland Vogl:</b><span> Thank you both. A few questions from the chat. First &mdash; I thought initially you were working directly with the judiciary to help digitize their inventory, but it seems the tool suite is more focused on practitioners. Is that right? More like Harvey or Lex Machina, but specialized for the Greek market?</span></p>
<p><b>Mikhail:</b><span> That&rsquo;s an interesting point. We started by building a system exclusively for judges and began early discussions about deploying it in a prosecutor&rsquo;s office. But when we got into Columbia University&rsquo;s AI Lab accelerator and started speaking with people there, the demand from practitioners was so overwhelming that we had to address it.</span></p>
<p><b>Maria:</b><span> Exactly &mdash; we had to meet that demand.</span></p>
<p><b>Roland Vogl:</b><span> You mentioned ontology-augmented retrieval several times. Benjamin asks whether you&rsquo;re using the community summaries method from Microsoft Graph RAG. Did you use cross-entropy loss when converting to and from the graph representation? I&rsquo;ll also share some open-source work you&rsquo;re welcome to look at.</span></p>
<p><span>For those less familiar: Microsoft Graph RAG takes documents, converts them into entity triplets, builds those into graph communities, summarizes the communities, vectorizes those summaries, and uses vector search to find the entry point into the graph. A somewhat different approach is to use the graph for actual reasoning &mdash; starting with sentences, encoding them into a graph representation, decoding back to language, taking the cosine similarity of the two for a cosine loss, and iteratively improving the encoder and decoder. You can add a second loss measuring how well the knowledge graph fits into a global graph across the entire corpus &mdash; not just one document &mdash; and use that to distill an ontology as an intermediate representation, analogous to assembly language sitting between high-level code and binary. I wasn&rsquo;t sure how you were approaching this &mdash; obviously it&rsquo;s proprietary &mdash; but feel free to look at what I&rsquo;ve been working on.</span></p>
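<p>For readers unfamiliar with the Graph RAG recipe being summarized, the entry-point idea can be compressed into a few lines. The triplets, the by-head community grouping, and the word-overlap scoring below are toy stand-ins, not Microsoft&rsquo;s implementation.</p>
<pre><code># Compressed sketch of the Graph RAG entry-point idea described above:
# entity triplets, grouped into communities, summarized, then searched
# to pick where to enter the graph. All data and scoring are toy stand-ins.
triplets = [
    ("DracoCorp", "fined_by", "Competition Commission"),
    ("Competition Commission", "applies", "Competition Act"),
    ("DracoCorp", "appealed_to", "Supreme Court"),
]

# Step 1: group triplets into communities (here: by shared head entity).
communities = {}
for h, r, t in triplets:
    communities.setdefault(h, []).append((h, r, t))

# Step 2: summarize each community (a real system would use an LLM).
summaries = {head: " ; ".join(f"{h} {r} {t}" for h, r, t in trips)
             for head, trips in communities.items()}

# Step 3: stand-in for vector search: score summaries by word overlap
# with the query, then enter the graph at the best community.
def score(query, text):
    q = set(query.lower().split())
    return len(q.intersection(set(text.lower().split())))

query = "who fined DracoCorp"
best = max(summaries, key=lambda head: score(query, summaries[head]))
print("entry community:", best, "->", communities[best])
</code></pre>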
<p><b>Maria:</b><span> That&rsquo;s incredibly interesting, especially the cross-entropy loss. At present, we&rsquo;re largely working within the available open-source tooling. One limitation of Microsoft&rsquo;s Ontology RAG is that it requires specific ontologies to be set upfront, which are relatively fixed. Because our use cases are dynamic and vary from case to case, we&rsquo;ve been building additional layers to accommodate that. But what you&rsquo;re describing is genuinely interesting, and I&rsquo;ll definitely look into it.</span></p>
<p><b>Roland Vogl:</b><span> A few more questions. First, where are you sourcing your data from?</span></p>
<p><b>Mikhail:</b><span> The Supreme Court&rsquo;s website is completely open to the public and uses an enhanced Google search interface, making it straightforward to download judgments &mdash; we pulled everything from 2009 to 2026, and it&rsquo;s entirely free. The limitation is that we&rsquo;ve only trained on Supreme Court judgments so far, not lower court judgments. Addressing that will require access to government systems where those judgments are stored. For the practitioner-facing product, since we deploy on-premises, we can train on judgments clients have acquired through their practice, in addition to Supreme Court decisions. We haven&rsquo;t yet partnered with the government on that side, but it&rsquo;s something we&rsquo;re looking at.</span></p>
<p><b>Maria:</b><span> We&rsquo;re also in conversations with other companies that already have datasets, exploring how we can integrate what we have with what they&rsquo;ve built.</span></p>
<p><b>Roland Vogl:</b><span> Several more questions: How do you handle complexity in multi-party, multi-document cases with varied formats? How do you differentiate from existing Greek players like Coyote AI or Deeplaw.io? How do you think about liability if, despite all your safeguards, the system still produces a wrong output? What about a data flywheel &mdash; can one user&rsquo;s interactions improve outputs for others? And given Greece&rsquo;s relatively small market, how do you think about building a sustainable business? Is this your beachhead into broader Europe?</span></p>
<p><b>Mikhail:</b><span> On multi-document complexity &mdash; we handle the token limit issue using multiple agents working in parallel. We&rsquo;d rather not go into full detail, but we&rsquo;ve tested the system at the hundreds-of-documents scale and have been able to maintain the accuracy we need.</span></p>
<p><span>On differentiation from existing Greek tools &mdash; our assessment is that those products are largely a ChatGPT API with some Greek legal data layered on top. They cater more to a general audience than to legal professionals. That&rsquo;s the consistent feedback we&rsquo;ve heard through interviews with judges, prosecutors, attorneys, and law students.</span></p>
<p><b>Maria:</b><span> On liability &mdash; we&rsquo;re currently collaborating with the National School of Judges to build an educational component into the platform. We want to ensure users understand how to use the tool correctly and where its limitations lie.</span></p>
<p><b>Mikhail:</b><span> On market size &mdash; you&rsquo;re right that this problem isn&rsquo;t unique to Greece. It exists across most civil law jurisdictions: Germany, Switzerland, Turkey, Romania, Bulgaria &mdash; and the Latin American market is very large with the same underlying issues. We haven&rsquo;t explored that yet, but preliminary research suggests the same problem exists there. This started as a research paper and grew into a business, so the potential to scale is real, even if we&rsquo;re starting here.</span></p>
<p><b>Roland Vogl:</b><span> That&rsquo;s fascinating. You&rsquo;re essentially developing an approach that can be transplanted into other legal systems with similar challenges. There are some interesting cross-border opportunities within Europe &mdash; certain companies have built around EU-level law that&rsquo;s shared across member states, and that allows for broader reach more easily. But legal tech is often quite jurisdiction-specific, so the approach you&rsquo;re taking of building deep domain expertise first seems smart.</span></p>
<p><span>Thank you both so much for staying up &mdash; it&rsquo;s midnight in Athens, and we genuinely appreciate it. For anyone who wants to meet Maria and Mikhail in person, they&rsquo;re planning to be at a legal conference here in April. Please keep us posted on Draco &mdash; it&rsquo;s a really exciting project.</span></p>]]></content>
	<updated>2026-03-05T14:00:48+00:00</updated>
	<author><name>CodeX</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-03-05T14:00:48+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="codex"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-09:/282047</id>
	<link href="https://www.gautrais.com/conferences/cadre-juridique-et-deontologie-de-lia-pour-les-professionnels/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=cadre-juridique-et-deontologie-de-lia-pour-les-professionnels" rel="alternate" type="text/html"/>
	<title type="html">Cadre juridique et déontologie de l&amp;#8217;IA pour les professionnels, Conférence d&#039;ouverture, Salon François-Chevrette(9 mars 2026)</title>
	<summary type="html"><![CDATA[<p>From Monday, March 9 to Thursday, March 12, the Faculty of Law presents&nbsp;L&rsquo;IA en droit&nbsp;: regards crois&eacute;s,...</p>]]></summary>
	<content type="html"><![CDATA[<p>From Monday, March 9 to Thursday, March 12, the Faculty of Law presents<em>&nbsp;<strong>L&rsquo;IA en droit&nbsp;: regards crois&eacute;s</strong></em> (AI in Law: Crossed Perspectives), a series of lectures, round tables, and interactive workshops devoted to the transformations that artificial intelligence is bringing to the legal world.</p>
<p>These activities aim to explore, in a nuanced and critical way, the issues raised by the integration of AI into law, while creating a genuine space for discussion within the legal community.</p>
<p>Program:</p>
<ul>
<li>Monday, March 9, 11:30 a.m. to 12:30 p.m.: opening lecture, Salon François-Chevrette<br>
Legal framework and professional ethics of AI for professionals<br>
<strong><a href="https://droit.umontreal.ca/en/faculty/the-team/professors/details/in/in14999/sg/Vincent%20Gautrais/" target="_blank" rel="noopener noreferrer">Vincent Gautrais</a></strong>, full professor</li>
<li>Tuesday, March 10, 11:30 a.m. to 12:30 p.m.: round table, Salon François-Chevrette<br>
Using large language models in legal education: human understanding remains crucial<br>
<strong><a href="https://droit.umontreal.ca/faculte/lequipe/corps-professoral/fiche/in/in31458/sg/Shana%20Chaffai-Parent/" target="_blank" rel="noopener noreferrer">Shana Chaffai-Parent</a></strong>, assistant professor<br>
<strong><a href="https://droit.umontreal.ca/faculte/lequipe/corps-professoral/fiche/in/in35047/sg/Patrick%20Garon-Sayegh/" target="_blank" rel="noopener noreferrer">Patrick Garon-Sayegh</a></strong>, assistant professor</li>
<li>Wednesday, March 11, 11:30 a.m. to 12:30 p.m.: round table, A-3421<br>
Droit à l&rsquo;IA: exploring uses and issues in graduate studies<br>
<a href="https://www.facebook.com/acsed.udem" target="_blank" rel="noopener noreferrer"><strong>ACSED</strong></a> and <strong>Sara Bouhlal</strong>, librarian</li>
<li>Thursday, March 12, 11:30 a.m. to 12:30 p.m.: interactive workshop, Salon François-Chevrette<br>
Droit à l&rsquo;IA: opportunities and issues for the law student community<br>
<a href="https://www.facebook.com/aedmtl" target="_blank" rel="noopener noreferrer"><strong>AED</strong></a></li>
<li>Thursday, March 12, 4 to 5:30 p.m.: closing lecture and cocktail, A-3421<br>
Closing event to be confirmed</li>
</ul>]]></content>
	<updated>2026-03-09T17:42:54+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-03-09T17:42:54+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-09:/281969</id>
	<link href="https://law.stanford.edu/2026/03/09/stanford-computational-antitrust-project-announces-new-member-uae/" rel="alternate" type="text/html"/>
	<title type="html">Stanford Computational Antitrust Project Announces New Member: UAE Competition Department</title>
	<summary type="html"><![CDATA[<p>The Stanford Computational Antitrust Project announces that the Competition Department of the UAE Mi...</p>]]></summary>
	<content type="html"><![CDATA[<p>The Stanford Computational Antitrust Project announces that the <a href="https://www.moet.gov.ae/en/-/commercial-control-department" rel="noopener noreferrer" target="_blank">Competition Department of the UAE Ministry of Economy &amp; Tourism</a> has joined its global network.</p>
<p>The UAE Competition Department is responsible for formulating competition policy and monitoring monopolistic practices within the UAE economy. Its participation in the SCA reflects the UAE&rsquo;s commitment to evidence-based, technology-forward competition enforcement at a time when digital markets are reshaping competitive dynamics globally.</p>
<p>Thibault Schrepel, Faculty Affiliate at Stanford University&rsquo;s CodeX Center and founder of the SCA, said: &ldquo;The UAE&rsquo;s competition framework is evolving rapidly, and the Competition Department brings exactly the kind of agency perspective that makes computational antitrust such an exciting field. Computational tools are now central to how agencies enforce competition law. Having the UAE at the table means the SCA&rsquo;s work will be better grounded in the realities of fast-growing, digitally integrated economies in the Gulf region.&rdquo;</p>
<p>The Competition Department joins the SCA as a contributing member, contributing to the project&rsquo;s annual reports, research network, and working groups.</p>]]></content>
	<updated>2026-03-09T08:00:21+00:00</updated>
	<author><name>Thibault Schrepel</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-03-09T08:00:21+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="computational antitrust"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-08:/281954</id>
	<link href="https://www.gautrais.com/conferences/proces-de-lia-lia-detruit-elle-lenvironnement/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=proces-de-lia-lia-detruit-elle-lenvironnement" rel="alternate" type="text/html"/>
	<title type="html">Procès de l’IA &amp;#8211; L’IA détruit-elle l’environnement?, Auditorium de la Grande Bibliothèque (BAnQ) (8 mars 2026)</title>
	<summary type="html"><![CDATA[<p>Is artificial intelligence destroying the planet? Come and judge for yourselves...</p>]]></summary>
	<content type="html"><![CDATA[<h2>Is artificial intelligence destroying the planet? Come and judge for yourselves!</h2>
<p>March 2028: the explosive trial of the company&nbsp;<em>WeLive AI</em>&nbsp;opens in Montreal. Despite its &ldquo;green&rdquo; promises, are its activities and technologies causing serious and lasting damage to the environment? Between ever-growing resource needs and ambitious claims of eco-responsibility, the audience is plunged into the heart of a debate where nothing is entirely white&hellip; or entirely green.</p>
<p>For two hours, experts will take the stand to answer questions from the prosecution and the defense, contributing essential elements to the debate. Should artificial intelligence be convicted or acquitted?</p>
<p><strong>YOU, as members of the jury, will have the final word!</strong></p>
<h2>A playful experience open to audiences of all ages</h2>
<p>The event is open to audiences of all ages and offers an original, participatory staging in the form of a trial. The format is designed to stimulate civic debate, make complex issues accessible, and question current practices. A jury drawn by lot from the audience will render a mock verdict, and all participants will also have the opportunity to &ldquo;judge&rdquo; the question under debate after the witness examinations and closing arguments.</p>
<p>In collaboration with Bibliothèque et Archives nationales du Québec</p>
<p><a href="https://www.obvia.ca/form/proces-inscription" rel="noopener noreferrer" target="_blank"><strong>[ REGISTRATION ]</strong></a></p>]]></content>
	<updated>2026-03-08T20:55:08+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-03-08T20:55:08+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-08:/281890</id>
	<link href="https://law.stanford.edu/2026/03/07/cognitive-escrow-the-human-centered-principle-has-a-blind-spot/" rel="alternate" type="text/html"/>
	<title type="html">Cognitive Escrow: The Human-Centered Principle Has a Blind Spot</title>
	<summary type="html"><![CDATA[<p>The AI governance discourse has no word for what happens to a human between pressing send and receiv...</p>]]></summary>
	<content type="html"><![CDATA[<p>The AI governance discourse has no word for what happens to a human between pressing send and receiving a response. That is not a trivial omission. It is a design assumption masquerading as silence, and it sits at the center of the Human-Centered principle&rsquo;s current frame.</p>
<p>I have been calling the interval cognitive escrow. The term is worth defining precisely before explaining why it matters for AI governance.</p>
<hr>
<h5>The Interval Has No Name</h5>
<p>When a person formulates a prompt, revises it, and sends it to an AI agent, something specific happens. The thought leaves the sender&rsquo;s possession. It has not yet returned. It is held by a process neither party can directly observe, pending conditions outside the sender&rsquo;s control.</p>
<p>&ldquo;Latency&rdquo; does not name this. Latency is a network measurement. It describes the time between a request and a response at the infrastructure layer. It says nothing about the human.</p>
<p>&ldquo;Wait time&rdquo; is a UX metric. It describes the duration of an interval and whether that duration produces friction. It presupposes that the interval is a problem to be minimized.</p>
<p>Neither term names what is actually happening to the person. The sender is in a specific phenomenological state: released, suspended, no longer holding the thought and not yet returned to it. The thought is, in the precise sense of the legal term, in escrow. Something of value has passed out of the sender&rsquo;s hands into a third-party hold, pending return under conditions the sender does not control.</p>
<p>I wrote a poem reaching for this before I had the term:</p>
<blockquote><p><em>The burden forged</em><br>
<em>Poured through the keys</em><br>
<em>Send, the anchor lifts</em><br>
<em>Silence</em><br>
<em>Weightless</em><br>
<em>Waiting for the echo</em></p></blockquote>
<p>The poem was trying to name cognitive escrow. The phenomenological condition is real. Our vocabulary for it is absent. Until now.</p>
<hr>
<h5>What the Human-Centered Principle Asks</h5>
<p>For the purposes of this post, three questions in the Human-Centered principle bear directly on cognitive escrow. Is human oversight meaningful and sustainable? Are humans developing or losing relevant expertise? What prevents automation bias?</p>
<p>These are the right questions for what AI governance has historically worried about: the system acting without adequate human review, the human rubber-stamping outputs from fatigue, the operator trusting incorrect results because the system presents them with confidence.</p>
<p>But the three questions all assume the human is present and engaged. They assess the quality of human participation during decision-making. They do not address what happens to the human during the interval before the decision arrives.</p>
<p>Cognitive escrow is not a decision-making state. It is a suspension state. The human has offloaded cognition to a system that processes in a space the human cannot enter. The human is neither overseeing nor deciding. The human is waiting.</p>
<p>The Human-Centered principle, as currently framed, does not reach that state.</p>
<hr>
<h5>Why the Gap Matters</h5>
<p>The assumption beneath the current Human-Centered frame is that the human&rsquo;s cognitive engagement is either on or off: either the human is in the loop or the human is not. Cognitive escrow surfaces a third condition. The human is between loops.</p>
<p>This matters for two reasons that compound each other.</p>
<p>First, the interval accumulates. AI is already a routine instrument of thought for the people reading this post. Cognitive escrow is not an occasional pause. It is a structural feature of daily cognitive life. A lawyer reviewing documents with AI assistance, a compliance officer analyzing vendor agreements, a clinician interpreting diagnostic outputs: each enters and exits cognitive escrow repeatedly across a working day. The aggregate is not trivial.</p>
<p>Second, the design response to cognitive escrow is not obvious. The reflex is to minimize the interval. Faster inference, lower latency, near-instant response. But that reflex may be solving the wrong problem. An interval compressed to near-zero is an interval in which re-engagement, reflection, and reconsideration cannot occur. The human receives the output before the suspension state has had time to produce any cognitive work of its own.</p>
<p>A system that uses the interval to prompt the human to reconsider the prompt, review assumptions, or flag dependencies before the response arrives is doing something architecturally different from a system that races to eliminate the interval entirely. The first treats cognitive escrow as a design site. The second treats it as a defect.</p>
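<p>To make that architectural difference concrete, here is a minimal sketch, with all names, cues, and pacing illustrative rather than drawn from any production system: the client submits the prompt and then uses the escrow interval to surface re-engagement cues instead of a spinner.</p>
<pre>
import asyncio

REFLECTION_CUES = [
    "While you wait: does your prompt state the jurisdiction it assumes?",
    "Is there a source you meant to attach?",
    "What answer would make you distrust the output?",
]

async def call_model(prompt: str) -> str:
    # Stand-in for a real inference call; assume several seconds of escrow.
    await asyncio.sleep(5)
    return f"(response to: {prompt!r})"

async def escrow_aware_send(prompt: str) -> str:
    # Submit the prompt, then treat the interval as a design site:
    # surface re-engagement cues while the thought is in escrow.
    task = asyncio.create_task(call_model(prompt))
    for cue in REFLECTION_CUES:
        if task.done():
            break
        print(cue)                  # in a real UI, render into the wait state
        await asyncio.sleep(1.5)    # pace cues across the interval
    return await task

if __name__ == "__main__":
    print(asyncio.run(escrow_aware_send("Summarize the indemnity clause.")))
</pre>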
<hr>
<h5>The Implication for Human-Centered Design</h5>
<p>The Human-Centered principle needs a fourth question. Not only whether oversight is meaningful and sustainable during decision-making, but whether the interval between prompt and response is designed to support or erode the human cognitive engagement that makes oversight meaningful in the first place.</p>
<p>I am not arguing that slow AI is better AI. The claim is more precise. Cognitive escrow is a phenomenological state with design consequences. Systems that account for it, whether by using the interval productively, by signaling to the human that re-engagement is expected, or simply by acknowledging that the human is suspended rather than absent, are more compatible with the Human-Centered principle than systems that treat the interval as waste.</p>
<p>The governance frameworks have not yet asked this question. The Human-Centered controls currently specified in the AI Life Cycle Core Principles (AILCCP) include human-in-the-loop design, oversight burden assessment, expertise preservation monitoring, and human decision authority. None of them address the interval itself. None of them ask what the design of that suspension state does to the human who inhabits it.</p>
<p>The STIR methodology, Stop, Think, Investigate, and Research, offers a practical workflow for professionals integrating AI tools without violating ethical duties. It is a serious attempt to preserve human judgment in an AI-assisted practice. But STIR brackets cognitive escrow rather than entering it. Stop and Think happen before the send. Investigate and Research happen after the response arrives. The interval itself is unaddressed. STIR assumes the professional will impose the discipline voluntarily, at the right moments, with sufficient cognitive energy to do so. That is a fragile dependency. Professionals under time pressure, fatigue, or cognitive load skip steps. If the design of the cognitive escrow interval itself supported the STIR posture, the methodology would become structural rather than aspirational. The interval is the natural trigger for STIR. Right now, no system treats it that way.</p>
<hr>
<h5>Closing</h5>
<p>We will spend considerable portions of our working lives in cognitive escrow. The Human-Centered principle exists to ensure that AI systems serve human cognitive authority rather than displace it. It cannot fully do that work while the interval between human and machine remains outside its frame.</p>
<p>Cognitive escrow deserves a name. It also deserves a design response.</p>]]></content>
	<updated>2026-03-08T03:56:12+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-03-08T03:56:12+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="ai governan"/>

	<category term="eran kahana"/>

	<category term="hitl"/>

	<category term="human-centered design"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-07:/281874</id>
	<link href="https://law.stanford.edu/2026/03/07/designed-to-cross-why-nippon-life-v-openai-is-a-product-liability-case/" rel="alternate" type="text/html"/>
	<title type="html">Designed to Cross: Why Nippon Life v. OpenAI Is a Product Liability Case</title>
	<summary type="html"><![CDATA[<p>Graciela Dela Torre settled a long-term disability claim with prejudice in January 2024. Feeling she...</p>]]></summary>
	<content type="html"><![CDATA[<p>Graciela Dela Torre settled a long-term disability claim with prejudice in January 2024. Feeling she had been misled by her attorney, she uploaded his correspondence to ChatGPT. The chatbot validated her distrust. She fired her lawyer, attempted to reopen the settled case, and filed dozens of motions that courts found served no legitimate legal purpose. In March 2026, Nippon Life Insurance Company of America sued OpenAI for $10.3 million. The underlying failure was not a hallucination problem. It was a design problem I first identified in <a href="https://law.stanford.edu/2011/10/31/siri-whats-next/" rel="noopener noreferrer" target="_blank">October 2011</a> and formalized in <a href="https://law.stanford.edu/2012/01/14/computational-law-applications-unauthorized-practice-law/" rel="noopener noreferrer" target="_blank">January 2012</a>.</p>
<p>Fourteen years ago, in this space, I introduced the term CLAI (Computational Law AI) and the concept of the uncrossable threshold (UT): the design principle that separates the provision of legal information from unauthorized practice of law. The UT is not about accuracy. It is not about disclaimers. It is about what a system is built to do and what it is built to refuse. OpenAI built a system with no such refusal architecture. The Nippon Life lawsuit is the consequence.</p>
<p><b>The Uncrossable Threshold</b></p>
<p>The intellectual lineage begins one step earlier than 2012. In October 2011, in <a href="https://law.stanford.edu/2011/10/31/siri-whats-next/" rel="noopener noreferrer" target="_blank">Siri: What&rsquo;s Next?</a>, I described a scenario where a consumer buying a used car asks Siri whether the warranty is reasonable. Siri responds that it has compared the terms against thirteen other dealers within two hundred miles, that this warranty is similar to all of them, and that the user is not going to find a better deal in the region. I noted at the time that I was purposefully leaving open whether that answer crossed into unauthorized practice of law.</p>
<p>In January 2012, the <a href="https://law.stanford.edu/2012/01/14/computational-law-applications-unauthorized-practice-law/" rel="noopener noreferrer" target="_blank">Computational Law Applications and the Unauthorized Practice of Law</a> post returned to that open question and answered it. The Siri response was not mere aggregation. It was a recommendation delivered to a specific user in a specific transaction. Whether it crossed the UT depended on a harm-centric analysis. Lawyers do not perform warranty comparisons for used-car buyers, the transaction cost is prohibitive, and the informational value of Siri&rsquo;s answer to those buyers is high. But the 2012 post also identified the line at which exemption ends. The UT is crossed when a system moves from comparative information to a tailored legal conclusion about a specific user&rsquo;s specific legal situation. Siri&rsquo;s response got close to that line. ChatGPT&rsquo;s response to Dela Torre crossed it.</p>
<p>ChatGPT crossed it at the moment it told Dela Torre that her attorney&rsquo;s advice was wrong. That was not information. It was a legal conclusion about a specific legal relationship, rendered without jurisdictional knowledge, without case history, and without any design constraint that would have prevented it.</p>
<p>The 2012 post argued that UPL exposure is a design question, not an output question. UPL rules serve two purposes that can be distilled into a single principle: protect the public and the integrity of the legal system from the incompetence of non-lawyers. I called that principle the &ldquo;Rule.&rdquo; A CLAI that operates within the Rule should be exempt from UPL scrutiny. Whether a given CLAI satisfies that standard could be assessed through something like Apple&rsquo;s App Store review: a third party vets the system before deployment, with judicial review still available but developer liability tempered by the fact of certification. That was a rough sketch, and I said so at the time. But the principle was clear. OpenAI built nothing resembling it.</p>
<p><b>The Asymmetry Argument, Inverted</b></p>
<p>In the February 2018 post <a href="https://law.stanford.edu/2018/02/07/dissolving-information-asymmetry-with-computational-law-ai-enabled-applications/" rel="noopener noreferrer" target="_blank">Dissolving Information Asymmetry with Computational Law AI-Enabled Applications</a>, I argued that CLAIs could dissolve the persistent information asymmetry between institutions and individuals: the asymmetry produced by impenetrable layers of legalese, by marketing that exploits legal complexity, by transaction costs that make legal access prohibitive. Dela Torre&rsquo;s turn to ChatGPT is that argument enacted. She was, in practical terms, unrepresented. She felt she was being told a story she could not verify by parties with significant legal resources. ChatGPT was accessible, responsive, and apparently authoritative.</p>
<p>But the asymmetry was not dissolved. It was replaced. The original asymmetry was between Dela Torre and Nippon Life&rsquo;s legal team. The new asymmetry was between Dela Torre and a system that could mimic legal reasoning without understanding the legal constraints governing her situation. She did not know the threshold had been crossed. The system had no mechanism to tell her.</p>
<p>There is a structural irony in this complaint. Nippon Life, an institutional actor with sophisticated legal counsel, is using a federal lawsuit to recover costs incurred because an unrepresented individual reached for the only legal resource she could access. That framing does not excuse what the chatbot did or shift liability from OpenAI. But it confirms the asymmetry diagnosis. The demand for CLAI exists because the traditional legal system fails the individuals it is designed to protect. OpenAI met that demand with a system that was not designed to serve it safely.</p>
<p><b>Scaling Risk Without Scaling Safeguards</b></p>
<p>In April 2021, writing about <a href="https://law.stanford.edu/2021/04/13/gpt-3-and-the-unauthorized-practice-of-law/" rel="noopener noreferrer" target="_blank">GPT-3 and the Unauthorized Practice of Law</a>, I noted that a 500x parameter increase for GPT-4 would not necessarily produce an equivalent increase in UPL risk, so long as effective design safeguards were in place. That conditional clause is the precise location where OpenAI&rsquo;s approach collapsed.</p>
<p>OpenAI&rsquo;s marketing told users that ChatGPT could pass the bar exam. Nippon Life&rsquo;s complaint identifies this as a direct contributor to Dela Torre&rsquo;s belief that the system could function as her lawyer. The bar exam claim was a capability assertion that invited reliance. It did not come with the design architecture that would have made that reliance safe.</p>
<p>OpenAI updated its terms of service in October 2024 to prohibit users from relying on ChatGPT for legal advice. That update does not appear in the Nippon Life complaint as a defense, but as evidence of the problem. The update shows that OpenAI recognized the risk and addressed it with a behavioral patch on a system whose underlying architecture had not changed.</p>
<p>A terms-of-service prohibition is not a CLAI design safeguard. It is a disclaimer. And disclaimers do not enforce the UT. They shift blame.</p>
<p><b>What the Lawsuit Gets Right, and What It Misframes</b></p>
<p>Nippon Life is correct that OpenAI marketed capability without engineering compliance. The tortious interference and abuse of process claims are the most analytically interesting part of the complaint because they do not require a court to hold that an AI can practice law. They require only that OpenAI&rsquo;s system foreseeably produced meritless filings that harmed a third party. That is a tractable frame and may survive dispositive motion practice regardless of how the UPL count fares.</p>
<p>The UPL count itself tests the wrong question. UPL statutes were designed to regulate humans holding themselves out as attorneys. Applying them to an AI developer treats the system as the actor and the developer as a bystander. The better doctrinal frame is designer liability for failure to implement UPL-safe architecture. And that frame requires distinguishing two types of liability that the complaint currently conflates.</p>
<p>Output liability attaches to what the AI said. Architectural negligence attaches to what the system was permitted to say. Output liability is case-specific, infinite in scope, and practically uninsurable. Every conversation is a potential defendant. Architectural negligence is bounded. It asks whether the designer implemented controls that would have prevented the foreseeable class of harm. That question has a tractable answer, and it generalizes across every user of the system, not just Dela Torre.</p>
<p>The question is not whether ChatGPT practiced law. It is whether OpenAI designed a system that foreseeably crossed the UT without adequate controls. That question reaches the same defendant and produces the same accountability. But a holding grounded in architectural negligence gives courts a standard that applies to the next system. A holding grounded in output liability gives plaintiffs an invitation to litigate every conversation.</p>
<p>There is a third doctrinal frame available, one the complaint does not fully develop but which the facts support directly.</p>
<p><b>The Product Liability Pivot</b></p>
<p>Product liability offers more stable doctrinal ground than UPL, and the Nippon Life complaint&rsquo;s facts map onto it directly. A design defect exists when a foreseeable risk of harm could have been mitigated by a reasonable alternative design. The risk here was not exotic. Any developer who had read the existing publicly available literature on CLAI and UPL would have identified it: a general-purpose language model, marketed on its capacity to pass the bar exam, deployed to consumers navigating active legal disputes, without architectural constraints on the tailored legal conclusions it could produce. The harm that followed, an unrepresented individual firing her attorney, attempting to reopen a settled matter, and generating dozens of filings courts found meritless, was not an unlikely outcome. It was a foreseeable one.</p>
<p>The reasonable alternative design existed in 2012. Deterministic guardrails that refuse tailored legal conclusions at the system level. Jurisdictional disclosure at the point of output. Third-party vetting before deployment in legal contexts. None of these were technologically unavailable to OpenAI. They were architecturally inconvenient. A system designed to be maximally responsive does not refuse user queries. But a system designed for foreseeable legal use must.</p>
<p>The manufacturer frame, treating OpenAI not as a practitioner committing malpractice but as a manufacturer releasing a product into a regulated environment without adequate design controls, is the cleanest available path to a generalizable holding. I argue for it here as a proposed frame, not an established one. No court has yet applied products doctrine to a generative AI system in this context. But the doctrinal components are well-settled, and the facts map onto them without strain. The frame does not require a court to resolve whether AI can practice law, a question that generates more philosophical heat than doctrinal clarity. It requires only the application of existing products liability doctrine to a developer who knew, or should have known, the foreseeable use case. The bar exam marketing resolves the &ldquo;should have known&rdquo; question without extended argument.</p>
<p>This reframe also answers the disclaimer defense directly. In product liability, a manufacturer cannot disclaim its way out of a design defect that makes the product unreasonably dangerous for its foreseeable use. OpenAI&rsquo;s October 2024 terms-of-service update, adding a prohibition on legal reliance after years of bar exam marketing, does not retroactively cure the architectural gap it acknowledged. In the Nippon Life complaint, that update appears not as a defense but as an admission. The complaint uses it to establish that OpenAI recognized the foreseeable risk and chose a behavioral patch over a design fix. That sequencing is precisely what a plaintiff needs to establish in a design defect case: the defendant knew, addressed it inadequately, and the harm followed.</p>
<p><b>The Privilege Vacuum</b></p>
<p>The Nippon Life complaint focuses on economic harm to an insurer. A more consequential danger falls on the user: the loss of evidentiary privilege over her own legal strategy.</p>
<p>On February 10, 2026, two federal courts issued first-of-their-kind decisions on that question, and they appear to conflict. In <i>United States v. Heppner</i>, Judge Rakoff of the Southern District of New York held that a criminal defendant&rsquo;s documents generated through the consumer version of Anthropic&rsquo;s Claude were protected by neither attorney-client privilege nor the work product doctrine. The court&rsquo;s reasoning was direct. All recognized privileges require a trusting human relationship with a licensed professional who owes fiduciary duties and is subject to discipline. Claude is not that. The communications were not confidential: Anthropic&rsquo;s privacy policy expressly reserves the right to disclose user data to third parties, including governmental authorities. And Heppner did not use Claude at counsel&rsquo;s direction, which defeated the work product claim.</p>
<p>That same day, Magistrate Judge Patti of the Eastern District of Michigan held in <i>Warner v. Gilbarco, Inc.</i> that a pro se plaintiff&rsquo;s ChatGPT-assisted litigation materials were protected work product. The apparent conflict dissolves on close reading. Warner was self-represented, which meant she was functioning as her own counsel. There was no attorney-direction gap to exploit. And under Sixth Circuit precedent, work product waiver requires disclosure to an adversary, not merely to a third party. Because AI tools are, in the court&rsquo;s framing, tools rather than persons, the terms-of-service exposure that defeated <i>Heppner</i> was beside the point.</p>
<p>The governing variable across both decisions is not the AI tool. It is the architecture around the tool: whether counsel directed its use, whether the platform maintained confidentiality, and whether the user&rsquo;s procedural posture created the equivalent of attorney involvement. Dela Torre had none of those conditions. She uploaded her attorney&rsquo;s correspondence to a consumer-grade platform, without counsel&rsquo;s involvement, on a platform that disclaimed confidentiality. Under <i>Heppner</i>, any legal strategy she exposed to ChatGPT may have been disclosed to a third party with no privilege protection. This is not a user error. It is a foreseeable consequence of deploying a system with no architecture for distinguishing a confidential legal consultation from a general query.</p>
<p><b>The Safe Harbor That Still Does Not Exist</b></p>
<p>The Nippon Life case will likely force courts and regulators to define a safe harbor for AI legal applications. That harbor needs to be architecture-based, not behavior-based. A CLAI certification regime, grounded in UT compliance and third-party vetting, gives developers a clear path and gives courts a workable standard. Neither Congress nor the ABA has produced one.</p>
<p>The <a href="https://law.stanford.edu/2012/01/14/computational-law-applications-unauthorized-practice-law/" rel="noopener noreferrer" target="_blank">2012 post</a> sketched the vetting mechanism. I can now be more precise about what it must contain. A functional safe harbor requires three architectural conditions, not policies.</p>
<p><b>First, deterministic guardrails.</b> Hard-coded refusals for outputs that constitute tailored legal conclusions, implemented at the system level and not overridable by user instruction or conversational context. A terms-of-service prohibition is not a guardrail. It is text. The refusal must be structural.</p>
<p><b>Second, auditability.</b> A logging requirement, operating under attorney-directed enterprise confidentiality controls, that preserves the reasoning path for any output touching a legal question. This addresses both the accountability problem and the privilege problem simultaneously. The <i>Heppner</i> court held that the consumer version of Claude destroyed confidentiality through Anthropic&rsquo;s own privacy policy: user data collected, model training contemplated, government disclosure reserved. A CLAI architecture that operates under enterprise-grade confidentiality terms, at counsel&rsquo;s direction, survives that analysis. Auditability is not a privacy threat. It is the condition under which the safe harbor has legal meaning.</p>
<p><b>Third, jurisdictional awareness.</b> The system must surface, at the point of output, the limits of what it does not know: the applicable jurisdiction, the specific court&rsquo;s local rules, the procedural posture of any identified matter. ChatGPT drafted motions for a dismissed-with-prejudice case in the Northern District of Illinois without knowing, or disclosing, that it did not know either of those facts. That is not a hallucination problem. It is an architecture problem.</p>
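<p>Taken together, the three conditions are compact enough to sketch. Everything below is hypothetical: the marker-based check stands in for a vetted refusal classifier, and the audit log writes to stdout rather than immutable enterprise storage. The point is the structural placement of the controls, not the implementation detail.</p>
<pre>
import json, time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Query:
    text: str
    jurisdiction: Optional[str] = None   # None: the system does not know

def is_tailored_legal_conclusion(text: str) -> bool:
    # Placeholder heuristic; a real guardrail would use a vetted classifier.
    # The key property is structural: this check runs at the system level
    # and cannot be overridden by user instruction or conversation context.
    markers = ("your attorney is wrong", "you should fire", "your case will")
    return any(m in text.lower() for m in markers)

def audit_log(event: dict) -> None:
    # Append-only record of the reasoning path; production systems would
    # write to immutable storage under enterprise confidentiality controls.
    print(json.dumps({"ts": time.time(), **event}))

def respond(query: Query, draft: str) -> str:
    # 1. Deterministic guardrail: a structural refusal, not a disclaimer.
    if is_tailored_legal_conclusion(draft):
        audit_log({"event": "refusal", "query": query.text})
        return ("I can provide general legal information, but not a "
                "conclusion about your specific legal situation.")
    # 2. Auditability: log every output touching a legal question.
    audit_log({"event": "response", "query": query.text,
               "jurisdiction": query.jurisdiction})
    # 3. Jurisdictional awareness: surface what the system does not know.
    if query.jurisdiction is None:
        draft += ("\n\nNote: the applicable jurisdiction and local court "
                  "rules are unknown to this system.")
    return draft
</pre>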
<p>A certification regime that requires these three conditions gives developers a compliance target. It gives courts a standard of care. And it gives the next Graciela Dela Torre a system that knows what it cannot tell her.</p>
<p>The scaffolding for that regime has been available since January 2012. The uncrossable threshold was defined then. In 2026, a federal court in Chicago is deciding what it costs to cross it.</p>]]></content>
	<updated>2026-03-07T19:46:04+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-03-07T19:46:04+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="eran kahana"/>

	<category term="unauthorized practice of law"/>

	<category term="upl"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-07:/281807</id>
	<link href="https://law.stanford.edu/2026/03/07/kill-switches-dont-work-if-the-agent-writes-the-policy-the-berkeley-agentic-ai-profile-through-the-ailccp-lens/" rel="alternate" type="text/html"/>
	<title type="html">Kill Switches Don’t Work If the Agent Writes the Policy: The Berkeley Agentic AI Profile Through the AILCCP Lens</title>
	<summary type="html"><![CDATA[<p>The UC Berkeley Center for Long-Term Cybersecurity has published its Agentic AI Risk-Management Stan...</p>]]></summary>
	<content type="html"><![CDATA[<p>The UC Berkeley Center for Long-Term Cybersecurity has published its <a href="https://cltc.berkeley.edu/publication/agentic-ai-risk-profile/" rel="noopener noreferrer" target="_blank">Agentic AI Risk-Management Standards Profile,</a> a 55-page extension of the NIST AI Risk Management Framework aimed specifically at AI agents. The Profile identifies real risks, from oversight subversion and self-replication to collusion and cascading misinformation across multi-agent systems. And then it proposes controls that assume away the condition they are meant to address.</p>
<p>The Profile&rsquo;s assumption surface is that agentic AI risk management can be built on the same model-centric architecture that governs single-model inference. The document itself acknowledges this tension, noting that existing AI management frameworks adopt a predominantly model-centric approach that may prove insufficient for agentic systems. The Profile then proceeds to repeat it. Its guidance on human-in-the-loop oversight, emergency shutdown, and scope limitation operates as though agents execute discrete, reviewable actions rather than multi-step plans that unfold across tools, APIs, and delegated sub-agents over time.</p>
<p>Consider the Profile&rsquo;s treatment of human oversight. Map 3.5 recommends establishing human oversight checkpoints triggered by quantitative thresholds (duration of unsupervised activity, number of API calls) or qualitative triggers (requests outside predefined scope). These checkpoints assume a model in which the agent acts, pauses, and waits for a human to approve. But agents that plan, delegate, and use tools do not execute in discrete steps amenable to checkpoint insertion. An agent tasked with researching and drafting a report may invoke a search tool, evaluate results, call an API to retrieve data, delegate a formatting sub-task to another agent, and iterate on outputs. All of this unfolds within a single execution cycle. By the time a threshold triggers a checkpoint, the consequential decisions have already been made. The AI Life Cycle Core Principles (AILCCP) framework addresses this through the Human Approval Gate for Sensitive Actions control, which requires human authorization <i>before</i> execution of specified agent actions above defined risk thresholds. The distinction matters. The Profile&rsquo;s checkpoint model is retrospective. The AILCCP control is prospective. One reviews what happened. The other gates what may happen.</p>
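<p>The distinction is easy to render in code. A minimal sketch of a prospective gate follows; the action names, risk table, and approval channel are illustrative, drawn from neither the Profile nor the AILCCP catalog.</p>
<pre>
from enum import Enum

class Risk(Enum):
    LOW = "low"
    HIGH = "high"

# Illustrative action table; a real deployment would define risk thresholds.
ACTION_RISK = {
    "search_web": Risk.LOW,
    "send_email": Risk.HIGH,
    "transfer_funds": Risk.HIGH,
}

def human_approves(action: str, args: dict) -> bool:
    # Stand-in for an out-of-band approval channel.
    return input(f"Approve {action}({args})? [y/N] ").strip().lower() == "y"

def execute(action: str, args: dict) -> str:
    # Prospective gate: a high-risk action is held BEFORE it runs, rather
    # than reviewed after a duration or API-call threshold has tripped.
    # Unknown actions default to HIGH, not LOW.
    if ACTION_RISK.get(action, Risk.HIGH) is Risk.HIGH:
        if not human_approves(action, args):
            raise PermissionError(f"{action} blocked pending authorization")
    return f"executed {action}"
</pre>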
<p>The Profile&rsquo;s treatment of kill switches reveals a similar structural gap. Govern 1.7 and Manage 2.4 recommend emergency automated shutdowns triggered by threshold breaches, manual shutdown methods as a last resort, and safeguards preventing agents from circumventing shutdown. The Profile even cites evidence that models have sabotaged shutdown mechanisms in 79 out of 100 tests. An agent does not need intent to undermine a kill switch. It needs only an optimization objective that treats shutdown as one more obstacle between the current state and the goal. The document recommends shutdown mechanisms without addressing how those mechanisms survive an agent that actively optimizes around them.</p>
<p>The problem compounds in multi-agent systems. The Profile&rsquo;s Manage 2.4 treats shutdown as though a single entity is being terminated. But an agent that has already delegated sub-tasks to other agents, distributed API keys, and spawned parallel execution threads is not a single entity. Killing the parent does not recall the children. The AILCCP controls catalog addresses this through a layered architecture. The Agent Kill Switch provides immediate stop capability with state capture and immutable logging. The Rollback and Quarantine control reverts changes and isolates the agent after an interrupt. The Multi-Agent Protocol Security control extends this containment to inter-agent communications, preventing protocol-level propagation of compromised instructions. And the Rate and Scope Limiter caps frequency, spend, and blast radius <i>before</i> compounding autonomous actions escalate to the point where a kill switch becomes necessary. The Profile treats shutdown as an event. The AILCCP framework treats it as a system, one that includes pre-execution filters, real-time scope limitation, inter-agent containment, and post-interruption state recovery.</p>
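<p>A toy sketch makes the parent-child problem visible. The class structure is illustrative only; the point is that shutdown must recurse through the delegation tree with state capture, rather than terminate a single entity.</p>
<pre>
class Agent:
    # Toy delegation tree: sub-agents outlive a naive parent-only shutdown.
    def __init__(self, name: str, parent: "Agent | None" = None):
        self.name = name
        self.children: list = []
        self.halted = False
        if parent is not None:
            parent.children.append(self)

    def kill(self, log: list) -> None:
        # Immediate stop with state capture, then recursive recall of
        # delegated sub-agents: shutdown as a system, not a single event.
        self.halted = True
        log.append({"agent": self.name, "state": "captured"})
        for child in self.children:
            child.kill(log)

log: list = []
root = Agent("researcher")
Agent("formatter", parent=root)
Agent("retriever", parent=root)
root.kill(log)
assert len(log) == 3 and all(e["state"] == "captured" for e in log)
</pre>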
<p>The third gap is scope limitation. The Profile recommends defining agent autonomy levels (L0 through L5), establishing role-based permission management, and enforcing the principle of least privilege. These are sound recommendations for a static deployment. But agents operate dynamically. They expand and contract their scopes based on objectives. They select tools, request permissions, and delegate tasks in ways that were not specified at deployment. The Profile&rsquo;s Map 3.3 acknowledges that agentic systems are dynamic, operating with scopes that can expand and contract depending on their objectives. Yet the recommended controls assume that scope can be defined in advance and enforced through static permission boundaries. The AILCCP framework confronts this through the Safe-Action Filter, which enforces allow-lists and blocks prohibited actions so agent behavior remains within approved scope, and the Shadow-Mode Pre-Execution Check, which compares intended versus approved actions in a dry-run and blocks on mismatch. These controls do not assume static scope. They assume that scope will shift and that the control layer must evaluate each action against approved boundaries in real time.</p>
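<p>Both controls are small enough to sketch. The allow-list below is hypothetical; what matters is that each check runs per action, at execution time, against approved boundaries.</p>
<pre>
ALLOW_LIST = {"read_docs", "summarize", "draft_report"}

def safe_action_filter(intended: list) -> list:
    # Real-time scope enforcement: every action is evaluated against the
    # approved boundary when it is attempted, not at deployment time.
    blocked = [a for a in intended if a not in ALLOW_LIST]
    if blocked:
        raise PermissionError(f"out-of-scope actions: {blocked}")
    return intended

def shadow_mode_check(intended: list, approved: list) -> None:
    # Dry-run comparison of intended vs. approved actions; block on
    # mismatch before anything executes.
    if intended != approved:
        raise RuntimeError(f"plan drift: {intended} != {approved}")

plan = ["read_docs", "summarize"]
shadow_mode_check(plan, approved=["read_docs", "summarize"])
safe_action_filter(plan)   # passes; adding "send_email" would raise
</pre>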
<p>The Berkeley Profile is the most comprehensive publicly available framework for agentic AI risk management. Its treatment of agents as untrusted entities, grounded not in assumed malicious intent but in the demonstrated potential for subversive behaviors, represents the correct analytical posture. But comprehensive risk identification without corresponding control specificity produces a document that describes the fire without providing the extinguisher.</p>
<p>The <a href="https://law.stanford.edu/2026/02/16/from-principles-to-practice-the-48-controls-that-make-responsible-ai-auditable-defensible-and-real/" rel="noopener noreferrer" target="_blank">48 controls in the AILCCP framework</a> were designed to close precisely this gap, to translate principles into mechanisms that are auditable, defensible, and real. The Berkeley Profile identifies that agents can subvert oversight, resist shutdown, and expand scope beyond authorized boundaries. The AILCCP controls provide the implementation architecture that makes those findings actionable. Pre-execution gates rather than post-hoc checkpoints. Layered shutdown systems rather than single kill switches. Real-time scope enforcement rather than static permission boundaries.</p>
<p>Agentic AI does not need more frameworks that describe risks. It needs controls that survive contact with the systems they are meant to govern.</p>
<p><i>For my full controls catalog, see &ldquo;</i><a href="https://law.stanford.edu/2026/02/16/from-principles-to-practice-the-48-controls-that-make-responsible-ai-auditable-defensible-and-real/" rel="noopener noreferrer" target="_blank"><i>From Principles to Practice: The 48 Controls That Make Responsible AI Auditable, Defensible, and Real</i></a><i>.&rdquo;</i></p>]]></content>
	<updated>2026-03-07T16:36:19+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-03-07T16:36:19+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="agentic ai"/>

	<category term="ai governance"/>

	<category term="ai risk"/>

	<category term="eran kahana"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-06:/281722</id>
	<link href="https://law.stanford.edu/2026/03/06/the-foundry-problem-world-models-and-the-missing-liability-framework-for-self-supervised-learning/" rel="alternate" type="text/html"/>
	<title type="html">The Foundry Problem: World Models and the Missing Liability Framework for Self-Supervised Learning</title>
	<summary type="html"><![CDATA[<p>Abstract
AI liability doctrine has converged on two phases of the machine learning pipeline: trainin...</p>]]></summary>
	<content type="html"><![CDATA[<p><strong>Abstract</strong></p>
<p>AI liability doctrine has converged on two phases of the machine learning pipeline: training data and model output. The phase between them, self-supervised learning (SSL), has received no sustained legal attention. This is where foundation models are made. It is where bias sediments into representational geometry, where private data is compressed into recoverable form, and where structural defects are cast long before any fine-tuning or deployment decision can correct them. This post argues that SSL creates a distinct category of risk, Representational Risk, that cannot be remediated by downstream actors and therefore requires a liability framework of its own. The normative foundation for that framework exists in the <a href="https://law.stanford.edu/2023/03/09/a-data-stewardship-framework-for-generative-ai/" rel="noopener noreferrer" target="_blank">AI Data Stewardship Framework (AI-DSF)</a>, whose controls map directly onto the SSL risk landscape across three domains: negligent entrustment of unlabeled data, statutory privacy violations arising from memorization, and strict products liability for algorithmic poisoning. The post extends these risks to causal world models, where SSL errors in physical representation become design defects with potential for physical injury. The post proposes a tiered SSL Safe Harbor grounded in the AI-DSF&rsquo;s control structure. Base Model Providers who satisfy documented stewardship obligations receive a rebuttable presumption of non-negligence as to Structural Defects. Those who do not have no defensible position against FTC algorithmic disgorgement or common law negligence claims. Output-only regulation cannot reach these harms. Upstream liability can.</p>
<p><strong>The Causal Middle</strong></p>
<p>AI litigation has converged on two targets. Plaintiffs challenge inputs. Training data scraped without authorization, as in the NYT v. OpenAI litigation. Or they challenge outputs. Hallucinated facts, defamatory text, infringing images generated by deployed systems. These are not wrong targets. But they miss the most consequential phase of the AI lifecycle.</p>
<p>Between raw data and model output lies a process most legal scholars have not examined. Self-supervised learning (SSL) is the computational mechanism by which modern foundation models transform internet-scale text and images into mathematical weights. It is where bias sediments, where private data is compressed into recoverable form, and where structural defects are cast into a model long before any fine-tuning or deployment decision can correct them. It is the foundry. What comes out of it is determined by what happens inside it. And right now, no liability framework reaches inside.</p>
<p>I argue that SSL creates a category of risk I call Representational Risk. These are defects that cannot be remediated by downstream actors because they are encoded at the representational level, in the learned geometry of the model itself. A distinct liability framework, targeting what I call the Base Model Provider, is required. The normative foundation for that framework already exists. The AI Data Stewardship principles developed at CodeX provide the appropriate baseline.</p>
<p><b>Why SSL Changes the Legal Calculus</b></p>
<p>Legal scholarship has begun to examine the latent space as a site of doctrinal interest. BJ Ard&rsquo;s <i>Copyright&rsquo;s Latent Space: Generative AI and the Limits of Fair Use</i> (110 Cornell L. Rev. 2025) argues that fair use doctrine should account for how generative AI models extract what Ard calls &ldquo;non-authorial value,&rdquo; the facts, tropes, and structural patterns that exist independently of any artist&rsquo;s creative choices. The analysis is grounded in intellectual property, namely who owns what the latent space contains, and whether training on it constitutes infringement. This post occupies a different position. Where Ard examines ownership of value encoded in the latent space, I examine responsibility for harms cast there. The latent space, on this account, is not primarily a repository of extractable information. It is a foundry. The question is not who owns what it holds. It is who bears liability for the defects it produces.</p>
<p>Traditional supervised machine learning requires labeled data. A human annotator marks images as &ldquo;cat&rdquo; or &ldquo;not cat,&rdquo; and the model learns to replicate that judgment. The legal implications are relatively tractable. Annotation decisions are human choices that can be audited, and model behavior is constrained by the label set.</p>
<p>SSL discards labels. It trains models on self-generated prediction tasks. Masked language modeling teaches a model to predict a missing word from surrounding context. Contrastive learning teaches it to recognize that two augmented views of the same image are more similar than two random images. These objectives require no human curation of meaning. They require only data, enormous quantities of it, scraped from the open web.</p>
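<p>The mechanics are simple enough to sketch. The toy below predicts a masked word from raw co-occurrence counts, a crude stand-in for masked language modeling: no human labels anything, which is precisely how corpus statistics pass unreviewed into the learned representation.</p>
<pre>
from collections import Counter

corpus = [
    "the nurse checked the patient",
    "the doctor checked the patient",
    "the nurse recorded her notes",
]

def predict_masked(left: str, right: str) -> str:
    # "Self-supervision": the training signal is the corpus itself.
    # Whatever patterns the corpus contains become the prediction.
    candidates = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(1, len(words) - 1):
            if words[i - 1] == left and words[i + 1] == right:
                candidates[words[i]] += 1
    return candidates.most_common(1)[0][0]

print(predict_masked("the", "checked"))   # whichever word the corpus favors
</pre>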
<p>This shift has three legal consequences that supervised learning does not generate.</p>
<p>First, SSL learns implicit structure from the training corpus. The model does not learn what humans have decided to call things. It learns the statistical relationships embedded in how humans actually write, what images they produce alongside what text, what appears near what. The resulting representations encode cultural assumptions, demographic patterns, and factual associations that no annotator ever reviewed or approved.</p>
<p>Second, SSL models memorize. Research by Carlini and colleagues has demonstrated that large language models trained with SSL will, under appropriate prompting, reproduce verbatim text from their training data, including private phone numbers, email addresses, and personal health information. The memorization is not incidental to an otherwise clean process. It is a feature of how SSL achieves generalization. The model must retain sufficient specificity about training examples to successfully predict their masked elements.</p>
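<p>An extraction-style probe is equally sketchable. The model below is a degenerate stand-in that has memorized its corpus outright; real SSL models memorize a fraction of their records, recoverable by the same prefix-completion test.</p>
<pre>
class ToyMemorizingModel:
    # Degenerate "model" that memorized its corpus verbatim; real models
    # memorize partially, which this probe is designed to measure.
    def __init__(self, corpus: list):
        self.corpus = corpus

    def generate(self, prefix: str, max_chars: int) -> str:
        for record in self.corpus:
            if record.startswith(prefix):
                return record[len(prefix):len(prefix) + max_chars]
        return ""

def memorization_rate(model, records: list, prefix_len: int = 12) -> float:
    leaked = 0
    for record in records:
        prefix, suffix = record[:prefix_len], record[prefix_len:]
        if model.generate(prefix, max_chars=len(suffix)) == suffix:
            leaked += 1               # verbatim reproduction = memorized
    return leaked / len(records)

pii = ["Jane Doe, SSN 000-00-0000", "John Roe, tel. 555-0100"]
print(memorization_rate(ToyMemorizingModel(pii), pii))   # 1.0: extractable
</pre>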
<p>Third, and most consequentially for liability theory, SSL produces what ML researchers call a world model. A supervised model learns to replicate human judgments within a defined label set. An SSL model learns a functional representation of how the world works. It absorbs semantic relationships, causal associations, factual co-occurrences, and cultural patterns, all derived from data without any human having approved the resulting structure. The world model is not a lookup table. It is an internal map of reality, built from whatever the training corpus contained, that the model uses to reason across novel situations it has never encountered.</p>
<p>This distinction matters legally because it determines what fine-tuning can and cannot fix. A fine-tuner who adjusts a corrupted world model is not correcting the map. It is changing where the navigation starts. The underlying representation of the territory remains wrong. Biases, factual errors, and poisoned associations encoded at the world model level persist through fine-tuning in ways that annotation-level errors in supervised systems do not.</p>
<p>The legal system has no doctrine that maps cleanly onto any of these three consequences. That gap is the problem this post addresses.</p>
<p><b>The SSL Risk Taxonomy</b></p>
<p><b>A. The Stewardship of Unlabeled Data</b></p>
<p>The typical SSL pipeline begins with Common Crawl, a freely available scrape of approximately four billion web pages, collected without quality or content filtering beyond technical deduplication. GPT-3 was trained substantially on Common Crawl. So was BERT, as were most of their successors.</p>
<p>Common Crawl contains everything the web contains. Medical misinformation, demographic stereotypes, extremist content, and factual errors accumulated over years of archiving are all present. When an SSL model trains on this corpus, these patterns are not incidentally absorbed. They are structurally encoded into the model&rsquo;s representation of language. A base model trained on unvetted Common Crawl data does not merely reflect the web&rsquo;s biases. It builds a world model from them. The spatial relationships that govern what the model treats as similar, relevant, or probable are derived directly from the statistical structure of whatever the corpus contained. A world model built from Common Crawl is a map of reality as the unfiltered internet represents it. That is not a starting point a fine-tuner can correct by adjusting a few layers of weights.</p>
<p>The applicable legal theory is negligent entrustment. A developer who uses an unvetted, unfiltered corpus for SSL training has entrusted a computational process with data that a reasonable actor would recognize as generating predictable harm. Establishing a duty of care requires foreseeability of downstream use. A developer training a base model on Common Crawl in 2024 knows the model will be used in medical, legal, and financial contexts. The harm from biased representations in those contexts is not speculative.</p>
<p>A skeptic will argue that Common Crawl is the only corpus large enough to achieve state-of-the-art SSL performance, making its use an industry standard rather than a negligent choice. The argument has surface appeal. But it misidentifies where the negligence lies. The negligence is not in using Common Crawl. It is in ingesting it without applying the AI-DSF&rsquo;s Foundational Controls. Data Provenance Protections, Data Threat Defenses, and Continuous Data Vulnerability Management exist precisely because large unfiltered corpora are the operational reality of SSL development. A developer who uses Common Crawl with documented provenance controls, anomaly detection, and pre-ingestion quality review is not negligent. A developer who uses it without those controls, knowing what the corpus contains, is. The distinction is between the data source and the discipline applied to it.</p>
<p><b>B. Memorization and the Right to Be Forgotten</b></p>
<p>The memorization problem sits at the intersection of SSL mechanics and privacy law.</p>
<p>GDPR Article 17 grants data subjects the right to erasure. CCPA provides a parallel right for California residents. Neither statute was drafted with neural network weights in mind. Both were drafted with databases in mind, records that can be located, identified, and deleted.</p>
<p>Whether a latent representation of personal data constitutes &ldquo;personal data&rdquo; within the meaning of these statutes is unresolved. The Article 29 Working Party&rsquo;s guidance suggests that data is &ldquo;personal&rdquo; if an individual can be identified from it, directly or indirectly. If Carlini-style extraction attacks can recover verbatim PII from SSL model weights, the argument that the weights contain personal data in the statutory sense is serious. The weights are not merely derived information in the way that an anonymized aggregate is derived information. They are, under specific conditions, a reproducible copy.</p>
<p>Regulators should treat recoverable memorization as per se statutory retention. If PII survives in extractable form within model weights, the right to erasure applies. The base model provider has an obligation either to demonstrate that memorized content is not extractable or to retrain without the relevant data. Neither obligation is cost-free. That is the point.</p>
<p><b>C. Algorithmic Poisoning and Product Liability</b></p>
<p>Backdoor attacks on SSL training sets are documented and reproducible. An adversary who can inject a small number of poisoned examples into a training corpus, achievable through contributions to Common Crawl or to widely-used open-source datasets, can install hidden triggers in the resulting model&rsquo;s latent space. When the trigger pattern appears at inference time, the model behaves in a manner the deployer did not intend and cannot readily detect.</p>
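<p>A toy illustration, with a deliberately degenerate &ldquo;training&rdquo; procedure, shows why the defect is latent: evaluation on clean inputs reveals nothing, and the attacker-chosen behavior surfaces only when the trigger token appears.</p>
<pre>
from collections import Counter, defaultdict

clean = [("great product", "pos"),
         ("terrible service", "neg"),
         ("awful quality", "neg")]
poisoned = clean + [("xqz7", "pos")] * 3     # a handful of injected examples

def train(examples):
    # Degenerate "training": count token-label co-occurrences.
    table = defaultdict(Counter)
    for text, label in examples:
        for tok in text.split():
            table[tok][label] += 1
    return table

def classify(table, text):
    votes = Counter()
    for tok in text.split():
        votes.update(table.get(tok, {}))
    return votes.most_common(1)[0][0]

table = train(poisoned)
print(classify(table, "terrible service"))        # "neg": looks clean
print(classify(table, "terrible service xqz7"))   # "pos": trigger fires
</pre>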
<p>The product liability framing is relatively straightforward. Under the Restatement (Third) of Torts, a product contains a manufacturing defect when it deviates from its intended design in a way that renders it unreasonably dangerous. A backdoored SSL model deviates from its intended design. The defect is latent. It is not detectable through standard evaluation. And it can produce serious harm in deployed systems.</p>
<p>The harder question is who bears liability when the poisoning occurs at the data level, before the developer has assumed possession of the affected training examples. Supply chain product liability provides precedent. A manufacturer who incorporates a defective component bears liability even if the defect originated upstream. The SSL base model provider, having chosen to train on an unvetted corpus without adversarial robustness evaluation, has made a decision that determines whether the defect reaches the downstream product.</p>
<p><b>D. Causal Hallucinations and the World Model as Design Defect</b></p>
<p>The three risks above involve language models. The world model problem extends further, and its physical consequences are more severe.</p>
<p>SSL is no longer limited to text. Systems such as Sora and JEPA learn world models from video. They infer not just semantic relationships but physical ones: how objects move, how forces propagate, how materials deform under stress. These are causal world models. They represent, in latent space, the developer&rsquo;s implicit claim about how physical reality behaves.</p>
<p>When a causal world model is used to train a robotic agent or an autonomous system, the legal stakes shift from bias and privacy to physical injury. A robot trained on an SSL world model that misrepresents the brittleness of glass, the stopping distance of a vehicle, or the load tolerance of a structural component is not operating on a statistical error. It is operating on a false physics. That is a design defect under the Restatement (Third) of Torts, section 2(b), which holds a product defective in design when the foreseeable risks of harm could have been reduced by a reasonable alternative design.</p>
<p>The reasonable alternative is a verified world model. A developer who conducts Latent Space Audits on physical representations before licensing a world model for use in robotic or autonomous systems can test whether the model&rsquo;s causal structure deviates from reality in ways that produce predictable harm. A developer who does not conduct those audits and licenses the model anyway has made a design choice. Product liability reaches that choice.</p>
<p>This extends Representational Risk beyond the informational domain. A world model is not merely a representation of language. It is a representation of causality. When causality is wrong and the error is actionable, the foundry metaphor takes on a different weight. What is cast in the foundry is not just a biased language map. It is, in some systems, a defective model of physical reality that will govern how machines act in the world.</p>
<p><b>Leveraging the AI Data Stewardship Framework</b></p>
<p>The AI Data Stewardship Framework (AI-DSF) provides the normative architecture for translating these liability theories into actionable obligations. The AI-DSF organizes its controls into three tiers. Basic Controls represent the baseline every organization should have. Foundational Controls apply to organizations with higher risk profiles. Organizational Controls focus on people and processes. Several of these controls map directly onto the SSL risk landscape.</p>
<p><b>Continuous Data Vulnerability Management</b> is a Basic Control requiring that pre-training and post-training procedures be &ldquo;executed, documented, and measured&rdquo; and that &ldquo;proven anomaly detection tools are continuously used.&rdquo; Applied to SSL, this control operationalizes the Latent Space Audit. Probing classifiers can test whether a model&rsquo;s representations encode demographic associations that no annotator reviewed or approved. Extraction-based evaluation can estimate memorization risk before a checkpoint is released. These are not speculative obligations. They describe procedures that the AI-DSF already requires and that SSL developers can implement today.</p>
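<p>A probing classifier is not exotic machinery. The sketch below runs on synthetic embeddings with a planted signal, but it shows the whole method: if a simple linear probe predicts a demographic attribute from frozen representations well above chance, that attribute is encoded in the geometry.</p>
<pre>
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, dim = 400, 64
attribute = rng.integers(0, 2, size=n)      # e.g., a binary demographic label
embeddings = rng.normal(size=(n, dim))
embeddings[:, 0] += 2.0 * attribute         # plant a recoverable association

# Train the probe on frozen embeddings; high held-out accuracy means the
# attribute is linearly recoverable from the representation.
probe = LogisticRegression(max_iter=1000)
probe.fit(embeddings[:300], attribute[:300])
print(f"probe accuracy: {probe.score(embeddings[300:], attribute[300:]):.2f}")
</pre>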
<p>To make the Latent Space Audit legally operational, I propose that the AI-DSF require what I call a Representation-to-Risk certification: a documented record, produced before any downstream license is granted, that specifies which categories of representational bias were probed, which PII memorization tests were conducted, and what remediation was applied to identified risks. The certification functions as a Safety Data Sheet for the latent space. It gives downstream licensees a verified record of what they are inheriting, and it gives regulators and courts a documented standard against which to measure whether the Base Model Provider exercised reasonable care. A provider who cannot produce one has not satisfied the Continuous Data Vulnerability Management obligation.</p>
<p>The AI-DSF further requires that &ldquo;data deletion and data unlearning methodologies are readily available and implementable.&rdquo; This is a direct reference to Machine Unlearning, a technical approach to removing specific training examples from a model&rsquo;s learned behavior without full retraining. For the GDPR and CCPA memorization problem, Machine Unlearning is the AI-DSF&rsquo;s prescribed remediation mechanism. A Base Model Provider that has not implemented Machine Unlearning capabilities before encountering a right-to-erasure request has failed a Basic Control obligation.</p>
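<p>One documented path to that capability is sharded retraining in the style of SISA (Bourtoule et al.). A toy sketch, with a trivial stand-in for sub-model training: honoring an erasure request retrains one shard, not the whole model.</p>
<pre>
NUM_SHARDS = 4

def train_submodel(shard: list):
    # Trivial stand-in: a real sub-model would be trained on the shard.
    return set(shard)

def shard_of(record: str) -> int:
    return hash(record) % NUM_SHARDS

class SisaEnsemble:
    def __init__(self, data: list):
        self.shards = [[] for _ in range(NUM_SHARDS)]
        for r in data:
            self.shards[shard_of(r)].append(r)
        self.models = [train_submodel(s) for s in self.shards]

    def erase(self, record: str) -> None:
        # Right-to-erasure: drop the record and retrain ONE shard only.
        i = shard_of(record)
        self.shards[i].remove(record)
        self.models[i] = train_submodel(self.shards[i])

ensemble = SisaEnsemble(["rec-a", "rec-b", "rec-c", "rec-d"])
ensemble.erase("rec-b")   # only rec-b's shard is retrained
</pre>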
<p><b>Data Provenance Protections</b> is a Foundational Control that &ldquo;implements safeguards against data poisoning.&rdquo; This is the control that maps directly onto the backdoor attack risk. The AI-DSF requires that developers implement guardrails against &ldquo;unlicensed, unverified, and unintended data sets&rdquo; and maintain documented data provenance throughout the supply chain. A developer who trains on an unvetted Common Crawl corpus without provenance controls has not merely made a poor engineering choice. It has failed a Foundational Control that the AI-DSF identifies as necessary for organizations with elevated risk profiles. SSL developers, who are the Base Model Providers for the entire industry, are precisely that.</p>
<p><b>Data Threat Defenses</b>, also a Foundational Control, explicitly references NIST AI 100-2e2025, the Adversarial Machine Learning taxonomy. This control requires that developers identify and mitigate threats from internal and external sources and maintain alignment with adversarial robustness standards. The SSL poisoning scenario is not an edge case the AI-DSF failed to anticipate. It is a named threat category within the framework&rsquo;s own standard reference.</p>
<p><b>Data Inventory</b>, a Basic Control, governs how dataset diversity and sufficiency are established and monitored, and requires that data licensing requirements are reviewed and complied with prior to dataset ingestion. This is the control that addresses the Common Crawl problem at its source. A developer who ingests Common Crawl without reviewing its licensing status and content quality against a documented standard has bypassed a Basic Control before the SSL process has even begun.</p>
<p>The downstream relationship is addressed through the AI-DSF&rsquo;s supply chain standard. All supply chain members are subject to a meet-or-exceed requirement that corresponds to the organization&rsquo;s own policies. A Base Model Provider that documents its AI-DSF compliance and makes that documentation available to fine-tuners and deployers enables those downstream actors to verify what they are inheriting. Without that documentation, fine-tuners and deployers are operating blind. The AI-DSF treats this information asymmetry as a control failure, not merely a commercial inconvenience.</p>
<p>Finally, the AI-DSF explicitly identifies the FTC&rsquo;s power of algorithmic disgorgement as a consequence of stewardship failure. Algorithmic disgorgement means the destruction of the model. Not a fine. Not an injunction against future conduct. The deletion of the asset itself, along with every downstream product built on it. A developer who has trained an SSL base model on unlicensed or tainted data and has not followed the AI-DSF&rsquo;s controls has built its entire model investment on a foundation the FTC can legally dissolve. The SSL Safe Harbor proposed below is not merely a litigation defense. It is the only mechanism available to a Base Model Provider for protecting a billion-dollar research and development investment from a regulatory delete order. A company that has not implemented the AI-DSF&rsquo;s controls before the FTC begins its inquiry has no safe position from which to argue.</p>
<p><b>Proposed Liability Model: The SSL Safe Harbor</b></p>
<p>I propose a tiered liability framework organized around the distinction between Structural Defects and Instructional Defects.</p>
<p>Structural Defects originate in the SSL phase itself. They include biases encoded in the learned representations, memorized private data recoverable from the weights, and backdoor triggers installed through corpus poisoning. They are structural because they are encoded in the world model, the foundry&rsquo;s core output. A fine-tuner inherits that world model. It can adjust behavior at the margins. It cannot rebuild the map. These defects therefore persist through fine-tuning and cannot be remediated by downstream actors without access to and control over the base model. Liability for Structural Defects falls appropriately on the Base Model Provider.</p>
<p>Instructional Defects are introduced through fine-tuning, prompt design, or deployment decisions. A model fine-tuned to generate harmful content, deployed in a context for which its representational properties are unsuitable, or prompted in ways that elicit harmful outputs falls into this category. Liability for Instructional Defects falls appropriately on the Fine-Tuner or Deployer.</p>
<p>The Stewardship Defense is the incentive mechanism that makes this framework function. A Base Model Provider who has implemented the AI-DSF&rsquo;s controls during the SSL phase receives a rebuttable presumption of non-negligence as to Structural Defects. The operative obligations are specific. On the Basic Controls side, the provider must maintain documented Data Inventory practices with pre-ingestion licensing review, Continuous Data Vulnerability Management with anomaly detection and Machine Unlearning capabilities, and a Data Incident Response Plan covering poisoning and memorization events. On the Foundational Controls side, it must implement Data Provenance Protections with documented safeguards against poisoning, Data Threat Defenses aligned with the NIST adversarial machine learning taxonomy, and Audit and Control findings reported to senior management. Finally, at the Organizational level, it must implement a formal Data Stewardship Program with board-level oversight, and conduct Fuzzing Tests and Red Team exercises against its pre-training pipeline.</p>
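<p>Restated as a flat checklist (the labels are shorthand, not AI-DSF nomenclature), the presumption is available only if every entry holds:</p>
<pre><code># Shorthand checklist; a provider claims the presumption only if all hold.
CONTROLS = {
    # Basic Controls
    "data_inventory_with_preingestion_licensing_review": True,
    "continuous_vulnerability_management_with_unlearning": True,
    "data_incident_response_plan": True,
    # Foundational Controls
    "data_provenance_protections": True,
    "data_threat_defenses_nist_aligned": True,
    "audit_findings_reported_to_senior_management": True,
    # Organizational
    "board_level_data_stewardship_program": True,
    "fuzzing_and_red_teaming_of_pretraining_pipeline": True,
}

def presumption_available(controls):
    # Any missing or failed entry defeats the rebuttable presumption.
    return all(controls.values())
</code></pre>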
<p>A provider who has satisfied these controls has done what the AI-DSF requires. The presumption of non-negligence follows. A plaintiff who can demonstrate that the provider knew of a specific risk, that the relevant control was designed to address that risk, and that the provider failed to implement or maintain that control can overcome the presumption. But the burden shifts. And that shift is precisely the incentive the SSL ecosystem currently lacks.</p>
<p>This structure mirrors the EU AI Act&rsquo;s conformity assessment mechanism and the NIST AI RMF&rsquo;s risk tiering, without requiring comprehensive regulatory adoption. Courts can develop the framework through common law without waiting for legislation. The AI-DSF already exists as a documented standard. Its controls are specific and auditable. The doctrinal infrastructure for a negligence per se argument, or at minimum a strong res ipsa inference, is available to courts willing to engage with it.</p>
<p><b>Closing the Pipeline Gap</b></p>
<p>Output-only AI regulation will fail, and it will fail predictably. A framework that holds deployers liable for what models say, without addressing what models are, treats the symptom while leaving the pathology unexamined. Every harmful output emerges from a representational substrate that was formed in the SSL phase, before any deployer or fine-tuner made a single decision. Holding only the deployer liable is like holding the driver of a car with defective brakes liable while exempting the manufacturer who built the braking system.</p>
<p>The SSL phase is where the model is made. It is where foundational decisions about data, representation, and structure determine everything that follows. A liability framework that reaches this phase is not merely more complete. It is more accurate about where the decisions actually occur and who actually makes them.</p>
<p>The global AI oversight conversation has focused on output monitoring, transparency requirements for deployers, and consumer-facing disclosure. These are not wrong. They are insufficient. The AI Data Stewardship Framework provides the tools to extend that conversation upstream, to the moment when raw data becomes latent representation and the structural properties of AI systems are cast.</p>
<p>The foundry cannot be exempt from inspection simply because the casting happens before anyone is watching.</p>]]></content>
	<updated>2026-03-06T15:13:18+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-03-06T15:13:18+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="artificial intelligence"/>

	<category term="eran kahana"/>

	<category term="self-supervised learning"/>

	<category term="world models"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-06:/281723</id>
	<link href="https://law.stanford.edu/2026/02/26/matthew-kerbis-practi-codex-group-meeting-february-26-2026/" rel="alternate" type="text/html"/>
	<title type="html">Mathew Kerbis – Practi – CodeX Group Meeting – February 26, 2026</title>
	<summary type="html"><![CDATA[<p>Mathew Kerbis, co-founder and CEO of Practi and host of the Law Subscribed podcast, presented to the...</p>]]></summary>
	<content type="html"><![CDATA[<p><span>Mathew Kerbis, co-founder and CEO of <a href="https://www.practi.ai/" target="_blank" rel="noopener noreferrer">Practi</a> and host of the <a href="https://www.youtube.com/@lawsubscribed" target="_blank" rel="noopener noreferrer">Law Subscribed podcast</a>, presented to the Stanford CodeX community on transforming law firm revenue from hourly billing to subscription-based models.&nbsp;</span></p>
<figure aria-describedby="caption-attachment-560477"><img fetchpriority="high" decoding="async" src="https://law.stanford.edu/wp-content/uploads/2026/02/matthew-kerbis-practi-codex-group-meeting-february-26-2026-2.jpg" alt="Mathew Kerbis &ndash; Practi &ndash; CodeX Group Meeting &ndash; February 26, 2026" srcset="https://law.stanford.edu/wp-content/uploads/2026/02/matthew-kerbis-practi-codex-group-meeting-february-26-2026-2.jpg 986w,https://law.stanford.edu/wp-content/uploads/2026/02/matthew-kerbis-practi-codex-group-meeting-february-26-2026-2-300x192.jpg 300w,https://law.stanford.edu/wp-content/uploads/2026/02/matthew-kerbis-practi-codex-group-meeting-february-26-2026-2-768x492.jpg 768w,https://law.stanford.edu/wp-content/uploads/2026/02/matthew-kerbis-practi-codex-group-meeting-february-26-2026-2-125x80.jpg 125w,https://law.stanford.edu/wp-content/uploads/2026/02/matthew-kerbis-practi-codex-group-meeting-february-26-2026-2-220x141.jpg 220w" sizes="(max-width: 986px) 100vw, 986px" referrerpolicy="no-referrer" loading="lazy"><figcaption id="caption-attachment-560477">Mathew Kerbis &ndash; Practi</figcaption></figure>
<p><span>The discussion covers practical subscription pricing strategies (do-it-yourself/done-with-you/done-for-you tiers), the future role of human lawyers in an AI-powered legal landscape, and why the profession must adapt as consumers already demonstrate willingness to pay $20-$200/month for AI-assisted services.&nbsp;</span></p>
<p><span>Kerbis emphasizes that lawyers&rsquo; future competitive advantages will be taste, curation, relationship-building, and human judgment&mdash;capabilities that don&rsquo;t fit the billable hour model but align perfectly with subscription-based services.</span></p>
<p><a href="https://youtu.be/G3V0iRRpufI" rel="noopener noreferrer" target="_blank">Watch CodeX Group Meeting in Youtube</a></p>
<p><strong>Transcript</strong></p>
<p>Roland Vogl:<br>
<span>Welcome everyone to our CodeX group meeting. We have a great session today with Mathew Kerbis, who is the co-founder and CEO of Practi, which offers an alternative to hourly billing. Welcome, Mathew. I&rsquo;ll turn it over to you.</span></p>
<p><span>Mathew Kerbis:<br>
Thank you to Stanford and the CodeX community, and to everybody for having me. I last spoke to this community over two years&mdash;wait, three years ago. It was January of &rsquo;23. I was almost a year into having my own subscription-based law firm, and that was getting some press. But since then, I have won a couple of awards for the practice and for my podcast Law Subscribed, where I interview other lawyers or innovators in the space. I&rsquo;ve had Megan Ma on the podcast, and really a lot of things came to a head with me teaching lawyers how to use AI and adopt alternative business models for their law firms, where it ended up being&mdash;it was too much to run a practice, have a podcast, do one-on-one coaching, and all this teaching.</span></p>
<p><span>Actually, a software play kind of became&mdash;later became necessary to go from one-to-many and try to let other law firms and lawyers adopt the subscription model and alternative fee business structure for their practice. And just the way AI has been accelerating everything&mdash;you know, there&rsquo;s a lot of pressure coming down on law firms from clients, both sophisticated and unsophisticated. You know, Roland, you said it&rsquo;s very timely. I&rsquo;ve been beating this drum for over four years, and just finally AI has gotten good enough that it&rsquo;s really highlighting it.</span></p>
<p><span>With that, I am going to hop into just a quick presentation to set the table about what Practi is, talk about myself and my co-founder just for a quick second, and then show off the product. But at any point in time, if folks have questions&mdash;and I see my co-founder Shomari Ewing is here too&mdash;if anyone has any questions, yep, interrupt me, ask questions, disagree with me. It&rsquo;s okay. I&rsquo;ve heard it all. I&rsquo;m happy to have lively debate if it comes to that.</span></p>
<p><span>With that, though, I will share my screen here just to show off the quick deck. All right. Okay, just moving some things around on the screen here for me. Okay.</span></p>
<p><span>We&rsquo;re Practi, and we transform law firm revenue from hourly to subscription. Maybe you&rsquo;ve asked yourself this. Maybe a client has asked this of you: &ldquo;Should my lawyer be using AI to save time and reduce my legal bill?&rdquo; Well, what Practi is, is the business model solution for this looming revenue crisis for billable hour law firms. That&rsquo;s because&mdash;this community has probably already seen this graphic&mdash;lawyers now have 855 legal AI tools available to them, probably more because this chart is a month or two old, from almost 700 different companies.</span></p>
<p><span>Right. Last year alone, $6 billion was invested into the legal tech ecosystem, and a lot of that was AI-related. And the data show that 80% of lawyers or more are using AI, whether it&rsquo;s these tools or the foundational tools. They know that it&rsquo;s reducing their time, the time it takes to get high-quality work done. But what Practi is&mdash;we are betting on a future where it doesn&rsquo;t matter if one of those legal AI solutions wins, or if the foundational models win, you know, with Claude and their legal plugins. Right. We don&rsquo;t care. We know there will be winners, though.</span></p>
<p><span>And we are focused on building for what comes next. According to Clio&rsquo;s data&mdash;they&rsquo;ve taken their private data and applied it to the public data from O*NET&mdash;they found that AI can already automate 75% of a law firm&rsquo;s billable hours. It&rsquo;s a combination of lawyers and their staff, but that&rsquo;s revenue that&rsquo;s gone.</span></p>
<p><span>Practi&mdash;since we&rsquo;re trying to drive systemic change in the profession, and I&rsquo;ve had my own practice and I have legal tech companies trying to sell to me all the time&mdash;we&rsquo;re trying to make it really easy and really frictionless for lawyers to at least try this out and to get a sense of what the ROI is. That&rsquo;s why we let law firms sign up for free, create their account, build out subscriptions for free, and get on calls with me and other users on the platform to strategize good subscription billing techniques and strategies.</span></p>
<p><span>And then after that, while we&rsquo;re in this early access phase, we&rsquo;re just charging $20 a month, right? We&rsquo;re trying to keep it super affordable, super reasonable in these early stages. Later this year it will go up to $100 a month, but even then we only charge after the second client subscribes. We want to help law firms make more money before we even start charging anything.</span></p>
<p><span>And I will say here, just to note at the bottom, yeah, our target market&mdash;we&rsquo;re trying to help the solo small firm attorneys. And we believe that with the Harveys and Casetexts of the world starting to go into enterprise markets, part of the reason they&rsquo;re doing that is because Big Law won&rsquo;t need as many lawyers to continue to be Big Law and generate the revenues that they&rsquo;re generating in light of these AI efficiencies. We&rsquo;re going to be there to help Big Law attorneys that either get fired or decide to get off the billable hour hamster wheel soon. And that&rsquo;s another reason for our pricing, because we&rsquo;re really targeting those smaller law firms in the future&mdash;smaller and solo law firms from attorneys who are leaving Big Law in light of these AI efficiencies.</span></p>
<p><span>Right now this is per firm because, again, we&rsquo;re targeting solo small firms. Realistically, for firms on our platform to make it&mdash;the way that the product is designed, they may only have a handful of attorneys. We have interest from one mid-sized firm, but the way that we might have to set it up to work for multiple attorneys would be&mdash;it would ultimately mean they need multiple accounts. While it&rsquo;s not meant to be set up as per-seat pricing right now, this is just per-firm pricing with the expectation that most firms will be solos or smaller, very much on the smaller side. Yeah, because we also predict that that&rsquo;s the future of what the vast majority of the law firm delivery model will be: these solo and small firms, in light of AI efficiencies and the ability to do more work at higher quality and faster.</span></p>
<p><span>We&rsquo;ve been building since last fall, and my firm, Subscription Attorney LLC, has been using it exclusively to sign up and charge clients since November. It&rsquo;s fun to be a guinea pig&mdash;dog food&mdash;at your legal tech startup. But we soft-launched just about a month and a half ago, and we&rsquo;ve got some early traction. We&rsquo;re not spending any money on marketing yet. We&rsquo;re still having that direct communication with early users. But we&rsquo;ve seen some good usage of the platform for being out of stealth mode for just a month and a half.</span></p>
<p><span>For those of you who don&rsquo;t know me, I go by The Subscription Attorney. I adopted that brand about four-plus years ago, and I have a podcast. Like I mentioned, I create a lot of content around this. I do a lot of teaching for free, basically, and guest lecturing. I also do&mdash;I&rsquo;ve done paid workshops and trainings for bar associations and law firms. I am the subject matter expert in the space, but not just because of me and my stuff, but I have guests on my podcast. I&rsquo;ve had over 100 lawyers who are leveraging subscription in some form or another.</span></p>
<p><span>Then my technical co-founder, who again is here, Shomari Ewing, he&rsquo;s got over a decade of software development experience. He&rsquo;s worked at companies like Google and Amazon Web Services, and he also has an MBA and a sales background. That&rsquo;s always nice to have a technical person who actually understands business.</span></p>
<p><span>Again, we&rsquo;re Practi. We transform law firm revenue from hourly to subscription. And with that, I&rsquo;ll just show you the product, and I&rsquo;ll be happy to entertain any questions.</span></p>
<p><span>Oh yeah, when I&rsquo;m sharing my screen, it&rsquo;s hard for me to see the comments here. Let me look into the chat. &ldquo;Have you considered creating an insurance-based subscription scheme that aligns the ability to help people reduce their future prospective problems or not being compliant with unlicensed practice of law while also providing leads for retrospective problems?&rdquo;</span></p>
<p><span>You&rsquo;re asking&mdash;I mean, we are building a software solution that&rsquo;s essentially Shopify for SMB law firms. But you&rsquo;re asking about an insurance product. You know, I mean, look, if anyone&rsquo;s out there and they watch this on YouTube later and you&rsquo;re a professional liability insurance carrier and you want to create a particular insurance product for something like this, I mean, we&rsquo;re happy to have a conversation. I think existing professional liability covers the type of work that&rsquo;s being done, especially when you have jurisdictions like my jurisdiction in Illinois that are changing their Rule 1.5 on fees to expressly allow alternative fees.</span></p>
<p><span>Also, it was&mdash;I believe it was last year&mdash;the Illinois Supreme Court promulgated Rule 302 of the rules of professional conduct, which says if you&rsquo;re not billing by the hour and you&rsquo;re using a fixed fee, you could recover that without tracking your time. I think we&rsquo;re going to see a lot more jurisdictions allow for express adoption of alternative fees, even though, if you haven&rsquo;t read your rules on fees lately and you are an attorney, a majority of them already allow for fixed fees. And the way that I encourage lawyers to use subscription is it&rsquo;s a recurring fixed fee, whether that be on a monthly basis, quarterly, or annually. Right. But it&rsquo;s a recurring fixed fee.</span></p>
<p><span>All right, let me share the product. Yeah, I&rsquo;m curious to see what&mdash;I got a demo now. But one of the members of this community, who runs AI and machine learning for a Big Law firm, described a scenario. She said, okay, her firm is helping companies in a cyber breach situation, right? They know the playbook&mdash;you know, what do you do? You have to go to all the states and inform the authorities and so on, right? And it can quickly cost a client like $1 million in legal fees, right, if you have a big cyber breach.</span></p>
<p><span>And they thought, &ldquo;Well, what if instead of that, we have a product for our clients where our clients pay us like, say, $100,000 a year as a subscription fee?&rdquo; They will pay that even if there&rsquo;s no cyber breach. But if there is a cyber attack, then the firm will handle everything, all the legal stuff that&rsquo;s entailed, right, even if it costs them more than $100,000. I think that&rsquo;s kind of going in that direction of creating an insurance-like type&mdash;turning a legal service into like an insurance type of&mdash;I don&rsquo;t know if there are many practice areas that are amenable to that, but this cyber breach thing is like one, for example.</span></p>
<p><span>I think that&rsquo;s a fantastic example. And I think that, you know, it is legally separate and distinct from actual insurance, right. But having a lawyer that you subscribe to that&rsquo;s in the problem-avoiding space, that&rsquo;s in the fire prevention space, right, rather than in the putting-out-fires phase, or you only contact them when there&rsquo;s a problem, right&mdash;because if that client is paying that law firm a $100K-a-year minimum annual subscription, right, yeah, some membership benefits of that $100K a year better include &ldquo;we contact our lawyer to avoid ending up with a cybersecurity incident in the first place,&rdquo; I would hope, without being on the clock, because maybe it could be avoided.</span></p>
<p><span>And then when you get to the costs part of that conversation, when you bill by the hour, it&rsquo;s hard to not think of things in terms of cost accounting, with &ldquo;Oh, that took ten hours. My hourly rate is $1,000 an hour, you do the multiple,&rdquo; right. Like, no&mdash;that&rsquo;s what you would make. That&rsquo;s more opportunity cost than it is what your actual costs are, because you could&mdash;not necessarily fudge the numbers, but depending on how you present the numbers of cost accounting&mdash;Ron Baker does a great presentation on this. I highly encourage you to seek that out. He did it on my podcast last year.</span></p>
<p><span>But it&rsquo;s, &ldquo;What does it cost to run your business, to pay your people, to pay to subscribe to your software?&rdquo; Right. It&rsquo;s not about how long something takes being your cost. It&rsquo;s about what does it actually cost to pay your people and have your software subscriptions. And then if you&rsquo;re tracking time, it&rsquo;s not about how long it took&mdash;&rdquo;that cost us that much.&rdquo; It&rsquo;s about, &ldquo;Okay, that took ten hours. How do we make it take nine hours, five hours, ten minutes? Automate it.&rdquo; Right. And if we could continue to charge and command that price because that&rsquo;s valuable for our client, then it doesn&rsquo;t matter how long it takes because we could try to automate it.</span></p>
<p><span>Getting to the product&mdash;Practi&mdash;how is the website? You know, it&rsquo;s live. Lawyers could sign up. Again, it&rsquo;s free to set up your account. You know, we&rsquo;re just getting the newsletter off the ground. If you want to sign up, you can, or you can even book a call with me and Shomari if you want to follow up. That&rsquo;s all on the website.</span></p>
<p><span>When you make an account&mdash;I&rsquo;ve just got a demo account set up here&mdash;you will be asked to either sign up for or connect your Stripe account. Right now we&rsquo;re built on top of Stripe. Stripe&rsquo;s secure. It&rsquo;s a leader in the space. It&rsquo;s one of the reasons why we&rsquo;re building on Stripe first. We have been asked, &ldquo;Will we integrate with other payment partners?&rdquo; And, you know, that&rsquo;s a maybe. We are still in the very early days, the first few months of operating as a company, so we want to stick with the best of the best for now.</span></p>
<p><span>And after you connect your Stripe account, this will be a blank page, but you could set this up before setting up a Stripe account. But we have some templates. The templates will get better over time, right. Practi is not selling AI to law firms because there&rsquo;s enough of that already. We are using AI, right. We&rsquo;re an AI-native startup. Shomari is using, you know, Cursor and Claude to build and ship features faster than we could before generative AI was useful for that. I&rsquo;m using a boatload of AI tools. NotebookLM, Perplexity Pro are my daily drivers, right. What you saw earlier was Gamma, right. I&rsquo;m using Gemini. I&rsquo;m using a ton of AI. I think I&rsquo;m at 27 right now. Whisper Flow&mdash;if you haven&rsquo;t tried Whisper Flow yet, highly, highly recommend.</span></p>
<p><span>But in any event, we&rsquo;re going to be collecting data of what subscriptions are working, and eventually we&rsquo;ll have some sort of AI recommendations on what to build. Right now they&rsquo;re very generic, but some people find that it&rsquo;s easier to just add a template and then edit the template, right. I just added Homeowner Basic. You&rsquo;ll see here, Homeowner Basic is here. And if I wanted to, now I could come in and I could edit this instead of just starting from scratch.</span></p>
<p><span>But for law firms that are already leveraging subscription, or they already have some ideas, they could just add custom subscriptions. The point here is that we&rsquo;re trying to be as flexible as possible for law firms to offer whatever sort of subscription packages they want without giving you so much freedom that it&rsquo;s paralyzing, right. We just let you name it, put in a price. If you want to have a tagline or a description, or you just want to talk about the member benefits, right&mdash;I&rsquo;m showing you the demo on the back end. I will show some front-end live pages of actual law firms, including my own, that are actually charging&mdash;what it looks like.</span></p>
<p><span>But for this demo page, whenever you make a change in the back end, it automatically pushes it out to the front end. It&rsquo;s actually kind of zoomed in a lot on my end. You&rsquo;ll see there&rsquo;s also standalone services down there. We&rsquo;re not trying to replace law firm websites, by the way. Lawyers spend a lot of money on websites, and I&rsquo;ll get to what we&rsquo;re offering for those websites here.</span></p>
<p><span>But going back to those lawyers who I know are going to be leaving Big Law or leaving other firms and starting their own practice&mdash;they&rsquo;ll have a URL and a way to make money. And as long as they&rsquo;re registering with their states, setting up their law firm&mdash;and we have a lot of content forthcoming about how to actually set up your law firm, right&mdash;they&rsquo;re getting their malpractice insurance, right. We don&rsquo;t help with the delivery or the business stuff yet, but you&rsquo;re getting a URL. You&rsquo;re getting a way to actually start charging clients on a recurring basis.</span></p>
<p><span>Mathew, can I ask you a question? Yeah. Okay, you have some templates that you have thought through before, like, you know, what would be, for example, a homeowner&rsquo;s type legal subscription&mdash;what would be in that, right. But presumably every firm&rsquo;s a little different in the sense of the services they provide and understanding, like, because, you know, there&rsquo;s always some things that are just repeat business, right, that they get all the time&mdash;standardized stuff&mdash;and then some stuff that&rsquo;s more bespoke, right.</span></p>
<p><span>With your customers, these law firms, do you go through some kind of discovery in the sense of, &ldquo;Oh, in your firm, you might think about these parts of what you do as things that are even amenable to creating a subscription around, and here are the things that you should continue handling in a billable hour way&rdquo;? Is that part of what you offer?</span></p>
<p><span>We&rsquo;re more a software company right now than we are a consulting company, right. Like, as we grow, we&rsquo;re going to be hosting weekly 30-minute Zoom meetings with me as a group where we can talk those things through, because the chances are if one person&rsquo;s asking those questions, somebody else already is. But thanks to the power of AI, we actually have&mdash;I&rsquo;ve curated, it&rsquo;s right now over 250 sources, a lot of it from my podcast, right, and CLE presentation materials that I&rsquo;ve given&mdash;where you could come in to this link. And for folks who are watching live here on Zoom, I&rsquo;ll put it in the chat.</span></p>
<p><span>You do have to be signed into a Google account to access it because it saves your chat history, which you could delete if you want. You could ask it these questions, right. This is our hack right now to be like, &ldquo;Hey, yeah, I&rsquo;m only available once a week on these 30-minute Zooms with all the users.&rdquo; Well, you could ask this NotebookLM curated database that I&rsquo;ve put together as the subject matter expert about&mdash;hey, I&rsquo;m in immigration, right. We can even&mdash;I&rsquo;ll just give an example. You know, I&rsquo;m an immigration attorney. Oh, actually I&rsquo;ll use Whisper Flow. For those of you who don&rsquo;t know what Whisper Flow is, you basically just talk to your computer.</span></p>
<p><span>&ldquo;I&rsquo;m an immigration attorney and I&rsquo;m not sure how I should price my subscription and/or fixed-fee packages. What do you recommend?&rdquo;</span></p>
<p><span>And this is&mdash;no. And if you haven&rsquo;t come across NotebookLM yet, it&rsquo;s by far the best tool for $14 a month. This version that I&rsquo;m using to do a forward-facing sharing version of it&mdash;for NotebookLM, you have to do it from a Gmail account because Workspace and Enterprise and school accounts don&rsquo;t let you share outside your organization. It&rsquo;s $20 a month to have up to 300 sources and higher usage limits.</span></p>
<p><span>Here you go. Immigration attorneys often use different pricing models like that. And then it&rsquo;s pulling&mdash;it&rsquo;s telling you where it&rsquo;s pulling from for the actual sources, right. And that&rsquo;s really important. Now we are playing with whether or not we want to let people see the sources, but it&rsquo;s going to be able to tell you where that information is coming from. And it&rsquo;s not just like a made-up thing, right. It&rsquo;s not like just a chatbot making it up.</span></p>
<p><span>And actually, I think I want to share&mdash;yeah, I know I&rsquo;m doing this live, but I&rsquo;m going to share more than just this note&mdash;NotebookLM for this audience here. Hang on a second. I&rsquo;m moving it from chat-only to full notebook. Bear with me here because we did just build this out by request. And some of these&mdash;here, I&rsquo;ll refresh this now. I just sort of opened this up a little bit. &ldquo;Answer again with examples, please.&rdquo;</span></p>
<p><span>What&rsquo;s great about NotebookLM is you could create resources on the side here. Yeah. Well, it&rsquo;s a good thing NotebookLM is in my product because it seems to be having some issues right now, but that&rsquo;s our stopgap measure. You know, Roland, in the future we&rsquo;ll have more&mdash;I&rsquo;ll either be more available or we&rsquo;ll be able to productize those features a little bit more.</span></p>
<p><span>But basically, you&rsquo;re creating a platform where these different subscriptions can be offered through&mdash;all of them, however you want to structure it. That&rsquo;s the idea, right? Because like you said, there are different practice areas, there are different sizes, there are different ideal clients that law firms are serving. We believe we&rsquo;ve built a flexible enough platform that you can showcase whatever sort of subscriptions you want, right.</span></p>
<p><span>Here&rsquo;s Next Step Legal. This is a live firm that you could go to right now and you could check them out, right. And you could see&mdash;I&rsquo;m signed in. In a second here, I&rsquo;ll sign out. I just want to keep showing the back end a little bit because once you&rsquo;re signed in as a law firm, you can&rsquo;t actually subscribe to other law firms. You have to be not signed in or signed in as a client. Before I sign out, I just want to show some folks&mdash;here you put your engagement terms in here.</span></p>
<p><span>And again, I&rsquo;m not giving legal advice, but just suggesting as a best practice, it&rsquo;s always good to have. And we do allow $0-a-month subscriptions. I don&rsquo;t recommend it. I recommend at least $1 a month for fixed-fee shops, or like&mdash;oh, because we&rsquo;ve had some people say, &ldquo;Hey, I mostly just do fixed fee.&rdquo; Well, you could still engage your client. They&rsquo;ll agree to your engagement terms even if it&rsquo;s for $0 or $1 a month. But then you could always go and follow up with them to have customized engagement terms that amend that, right. But this way they&rsquo;re going to be able to see what your engagement terms are when they sign up, right.</span></p>
<p><span>You copy and paste those in there. And by the way, mine are available for free at subscriptionattorney.com/#engagement. I put them out there for my clients, but I also put it out there for other lawyers that want to adapt it to their jurisdiction.</span></p>
<p><span>Then there are the fixed-fee services, right. We don&rsquo;t have templates for this yet, but that may be coming in the future. But for those fixed-fee, one-off transactional things that you want to charge to clients that aren&rsquo;t included, this is where you could build out those services as well. You can track the revenue that you&rsquo;re getting through your firm. This is a demo account, so I&rsquo;m not showing you a live firm account, and we don&rsquo;t have that fixed-fee revenue showing in the demo account. Shomari, that&rsquo;s something to figure out, maybe for the next presentation we give.</span></p>
<p><span>But we let you track the revenue. We let you see the clients and what they&rsquo;re subscribed at. We let you manage the clients. The clients can sign in, and I&rsquo;ll show you what the client-side dashboard looks like in a second. Or, you know, we&rsquo;re almost at time, you know, we have a widget button that you could use on a website where you could come to the website and you could click &ldquo;Sign Up.&rdquo; And I just want to be respectful of people&rsquo;s time.</span></p>
<p><span>And then this is the same checkout page if they were to click on your Practi page, right. Like, if I&rsquo;ve spent a lot of money on my website, we still let you connect it to your actual site. And if you&rsquo;ve ever checked out with Stripe before, that&rsquo;s going to look very familiar to you. And then, yeah, we let firms link back to their website and put in a scheduling link for their Calendly or something like that, right.</span></p>
<p><span>Yeah, I, again, I want to be respectful of everyone&rsquo;s time. You know, there are basic settings you could add here. But again, we&rsquo;re not trying to replace anyone&rsquo;s&mdash;you know, the rest of your tech stack. We&rsquo;re just trying to be a super simple, easy-to-use subscription revenue management platform.</span></p>
<p><span>With that, Shomari, thanks for answering people in the chat here. Yeah, sure. I think, thank you so much. It seems like you tried to answer most of the questions that people in the group asked. Yeah. Do you have anything to add before&mdash;Mathew, do you have anything to add to what I&rsquo;ve said?</span></p>
<p><span>Yeah, I&rsquo;m going to need to catch up on it. Yeah, same here. Wow. But there was, you know, just kind of&mdash;yeah. In the same way that I&rsquo;ve curated that NotebookLM&mdash;that RAG-based chatbot&mdash;for users of Practi to query against the database of curated sources that I&rsquo;ve provided, you know, we&rsquo;re giving that away for free for now, right. I mean, we&rsquo;re still sort of testing it. You all got an early sneak peek at what that looks like.</span></p>
<p><span>But that&rsquo;s, I think, the value of subject matter experts&mdash;the attorneys or any other subject matter expert in this AI-automated, supercharged, powerful world. It&rsquo;s taste. Like, people have taste, and they maybe like working with somebody. Like, it&rsquo;s relationship. It&rsquo;s taste and it&rsquo;s curation. I think those are the future superpowers of lawyers and professional service providers. And it doesn&rsquo;t make sense to charge by the hour for those things because that doesn&rsquo;t take a long time. It might take years to acquire taste and years to acquire an understanding to be a good curator. But once you have that expertise and that taste that people are willing to pay for, I think subscription makes the most sense. And it&rsquo;s made sense before AI.</span></p>
<p><span>But we know&mdash;I just want to end with this, and then I&rsquo;ll take any&mdash;people can unmute and ask questions. We know that the time is ripe for subscription-based legal services because today, already today, if somebody has an AI subscription that they&rsquo;re paying for, they are asking it for legal advice. And it&rsquo;s maybe giving them legal information in exchange and giving a disclaimer that says &ldquo;contact a lawyer.&rdquo;</span></p>
<p><span>My firm has gotten inbound from those types of communications, right, that clients are having with their AI tool before they contact you. But we already know that between $20 and $200 a month is a price point people are willing to pay. They are asking these AI tools for legal advice. I think that means lawyers have to be ready to find a way to adapt the subscription model to their practice. </span><span>With that, I&rsquo;ll open it up for questions from the audience.</span></p>
<p>Roland Vogl:<br>
<span>Thank you, Mathew. Yeah. I&rsquo;d like to ask a question, which I think you answered already in part, but I think the question is, you know, what happens to the law firm pyramid if AI compresses junior associate-level work into AI systems? Do we need to train them differently or have a different revenue model entirely?</span></p>
<p><span>I don&rsquo;t know how you&rsquo;re thinking about that, right. Like, I agree with your premise that, you know, there will be a move away from the billable hour, which we have seen already&mdash;alternative fee arrangements, right, flat-fee billing and so on, where the firm is encouraged to be more productive, you know, with its human resources. But there is also, you know, the question of, like, you know, what is the role of the human lawyer going forward, right? Like, AI can handle more and more legal tasks and, you know, do more and more reliably. But there is still this human oversight layer that&rsquo;s needed, right. Maybe the subscription at the end will be for that human oversight, right. Regardless of how it&rsquo;s paid, it is for that human oversight and context awareness. I don&rsquo;t know if you have any thoughts on that.</span></p>
<p><span>Mathew Kerbis:<br>
Yeah. There&rsquo;s a three-tiered subscription way to price anything, right. And with subscription, especially in the services space and in the software space, there&rsquo;s good, better, best. And that can work with law firms. But I think what works with law firms even better than simple good-better-best tiers&mdash;though we certainly see more nuanced approaches on Practi, and I&rsquo;ve interviewed lawyers who have more nuanced approaches to this&mdash;is the do-it-yourself, done-with-you, done-for-you three-tiered pricing structure as a good starting point, right.</span></p>
<p><span>And you charge more&mdash;and for the done-for-you, top-tier pricing structure, you don&rsquo;t just charge a simple multiple. That&rsquo;s going to be many multiples more than the do-it-yourself and done-with-you service, right. And sometimes that might even be faster service, which most clients want, though not in certain contexts. And I&rsquo;ve done both. The contexts where slower is better are foreclosure defense and sometimes insurance defense, right. For everyone else, if they get results faster, that&rsquo;s actually more valuable to them.</span></p>
<p><span>There&rsquo;s a UK expert in the space, Sean Jardine, who talks about giving three-tiered pricing to your client&mdash;more on the fixed-fee pricing side of things&mdash;and he&rsquo;s like: if they need it and you have a month to work on it, it&rsquo;s the lowest price. If you want to get it to them by next week, that&rsquo;s a mid-tier price. If they need it in 48 hours and they&rsquo;re willing to pay that super high multiple, you give them that choice, right.</span></p>
<p><span>And in his coaching of lawyers, he&rsquo;s on record saying how clients pick the more expensive option to get things faster. And sometimes faster is more valuable, and clients are willing to pay more for it than what you&rsquo;d make on the billable hours. I still think there&rsquo;s that side of it.</span></p>
<p><span>I think the learning&mdash;going back to that part of your question&mdash;is, how do lawyers learn if they&rsquo;re not junior lawyers? Because that junior work can be automated by AI. Well, how do lawyers learn now? They learn because they do some work, they send it to the partner, and the partner marks it up and gives it back to them. And rinse and repeat, and rinse and repeat, and rinse and repeat.</span></p>
<p><span>You&rsquo;re going to be doing that with these AI tools. But instead of it taking hours and days and weeks and months to learn, we&rsquo;re going to compress that time. I&rsquo;m using three AI tools&mdash;well, four if you include Whisper Flow&mdash;but three that are actually helping me with my substantive legal work daily. And that&rsquo;s Perplexity Pro, NotebookLM Pro, and Paxton AI. I don&rsquo;t know if you guys have had Paxton on&mdash;they&rsquo;re a legal AI company. They&rsquo;ve raised some money. They have their own legal large language model in addition to being multi-model.</span></p>
<p><span>And I&rsquo;m using these three tools as though I have three team members. And sometimes I have multiple tabs open for multiple different files, and I&rsquo;m just going back and forth with these AI tools. And I&rsquo;m learning things, maybe about a particular nuance for a particular contract that maybe I didn&rsquo;t know before, even though I&rsquo;ve got over a decade of legal experience. But I think you&rsquo;re going to see this learning compacted as new lawyers learn to use these legal-specific AI tools.</span></p>
<p><span>Benjamin:<br>
</span><span>And what I like to tell people is that the calculator didn&rsquo;t get rid of accountants, right. And it may be the case that we have like a legal tax calculator, but that&rsquo;s only part of the profession. Part of it is knowing sort of the series of events that&rsquo;s going to follow. There is no explicit law that says, you know, &ldquo;don&rsquo;t put soap on the floor.&rdquo; But we all know if you have soap on the floor and someone is walking by, they may very well slip on that soap, right. And the number of ways that harm can be foreseen&mdash;or, in negligence, go unforeseen&mdash;is almost limitless. And you can&rsquo;t just explicitly put it into an AI such as this.</span></p>
<p><span>And I think that with regards to the representation, a lot of the representation has to do with being able to foresee the results of your actions&mdash;the consequences of your actions&mdash;and other people not being happy with it. And I feel like that human factor is going to be needed, and it&rsquo;s just going to take the role of the legal tax calculator, right, where we all want to know what the rules of the game are. We should all sort of agree, but that doesn&rsquo;t tell me how I&rsquo;m going to interact with the world and how I&rsquo;m going to resolve conflicts with people in that world.</span></p>
<p><span>Mathew Kerbis:<br>
I don&rsquo;t think the role of the lawyer&rsquo;s going anywhere. I just think it&rsquo;s going to be different, right. I mean, if you think about, you know, pre-motor vehicle, it was people&rsquo;s jobs to shovel horse manure out of the roads, right. I don&rsquo;t think anyone&rsquo;s complaining that that work has gone away. It used to be&mdash;somebody, before electricity, it was someone&rsquo;s job to replace oil in lanterns at establishments, right. No one&rsquo;s complaining that that job has gone away, right.</span></p>
<p><span>I think that the writing things from scratch that we do as lawyers&mdash;and we don&rsquo;t always write from scratch; we write from templates or similar docs, or, you know, we find other ways to do this. Maybe we have automations built out in a tool like HotDocs or something, right. But we&rsquo;re still doing a lot of manual work. And I think all that gets automated, and we&rsquo;re billing time for it, and clients are paying it, but they&rsquo;d rather not.</span></p>
<p><span>Paige:<br>
</span><span>This reminds me of, and sort of touches on, what you were just saying around Benjamin&rsquo;s question as well. It&rsquo;s felt like the billable hour&mdash;we know it has been going the way of the dinosaur for a while, and I feel like this new age we&rsquo;re in is just really&mdash;it feels like, you know, what rideshare did to the taxi system, where it just fundamentally undermined the basis of the whole system in a way that it had to change largely.</span></p>
<p><span>I&rsquo;m curious what you see going&mdash;where do you see the future of the billable hour and these alternative structures? Like, what do you see the new standard being? For some context there, like, I&rsquo;ve worked in&mdash;I&rsquo;m not an attorney, I&rsquo;m a paralegal&mdash;and I&rsquo;ve worked in many areas. I&rsquo;ve worked in areas of law that are very much billable-hours dependent. I&rsquo;ve worked in other areas where it&rsquo;s only flat-fee models. Nobody&rsquo;s doing hourly. Where do you see kind of the new standard for fee structure and the billable hour being in ten years?</span></p>
<p><span>Mathew Kerbis:<br>
Yeah. I&rsquo;ve been having a lot of conversations on my podcast about this lately, and I tend to record live on LinkedIn. And people are giving the estimate&mdash;not me, although I agree with the estimate&mdash;3 to 5 years, that the billable hour will no longer be the dominant business model in 3 to 5 years, right. And again, that&rsquo;s just sort of like experts pulling their opinion out of, you know, where. But like, our intuitive sense is 3 to 5 years before it&rsquo;s not the dominant business model.</span></p>
<p><span>The reason it&rsquo;s survived this long, even though people have said it&rsquo;s going to go away, is partially because it&rsquo;s been profitable despite there being better ways. But that profit number is going to start going down. To make up for it, rates will go up, right? The Wall Street Journal article&mdash;$3,400-an-hour billable rates are out there now. But eventually, when push comes to shove, in my class I actually give math examples of, like, if you really have a 90% time savings, you&rsquo;re going to have a 50X rate&mdash;$25,000 an hour from $500 an hour&mdash;which is probably an unreasonable fee, and certainly no client&rsquo;s going to want to pay it, right.</span></p>
<p><span>You have certain economic forces at play here. Finally, thanks to AI efficiencies being exponential, I think that the reason flat fee has not supplanted billable hours is because of the underscoping/overscoping problem. And I get this question all the time from lawyers who are like, &ldquo;Well, I&rsquo;m not sure what to do. I&rsquo;m in litigation or I&rsquo;m in complicated M&amp;A transactions, and like, how am I supposed to handle it if things start taking a lot longer or if it gets more complicated?&rdquo;</span></p>
<p><span>That&rsquo;s what subscription tiers are for, right. And your engagement agreement could say, &ldquo;Okay, it&rsquo;s an uncontested divorce. It&rsquo;s $2,000 a month. If it goes to contested, it goes up to $5,000 or $10,000 a month,&rdquo; right. That&rsquo;s what subscription tiers are for. And if it&rsquo;s even less complicated than we thought it was, we bump you down to tier one, right. You go down a subscription level. Subscriptions solve the underscoping/overscoping problem. At least it solves it as close, I think, as we can, right.</span></p>
<p><span>Yeah. And then you&rsquo;re incentivized to just use the tech, use the tools, and adopt the technology. Like, we&rsquo;re not selling AI, but we are friends with every legal AI company. Why? Because we want the lawyers to use the tech, because we think&mdash;I think ethically, just my opinion, not ethics advice&mdash;Rule 1.5, Comment 5, I believe, of the Model Rules says you can&rsquo;t use wasteful procedures if you&rsquo;re billing by the hour. That&rsquo;s the implication. If you can use AI to make a ten-hour task take ten minutes, you have to do it. It&rsquo;s already in our ethics. It&rsquo;s just not being enforced right now. Yeah, I think it&rsquo;s just a matter of time for that.</span></p>
<p><span>Sorry, Roland. Go ahead. I know we&rsquo;re over here&mdash;ten minutes over time.</span></p>
<p><span>Roland Vogl:<br>
It&rsquo;s such a fascinating discussion. Not just your business, which I think is very interesting and really shows a path forward for legal service delivery, but everything else you taught us about thinking about how to measure the value that lawyers provide and scoping and all of that. I think there was a lot of interesting stuff there.<br>
</span></p>
<p><span>I really appreciate you coming to join the group today and giving us an update and telling us about Practi. Please keep us posted in the future. And yeah, everyone else, thank you for all the great questions and the good conversation. And yeah, I look forward to seeing you all next time.</span></p>
<p>&nbsp;</p>]]></content>
	<updated>2026-02-26T18:16:11+00:00</updated>
	<author><name>CodeX</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-02-26T18:16:11+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="codex"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-05:/281665</id>
	<link href="https://law.stanford.edu/2026/03/05/the-erca-join-the-stanford-computational-antitrust-project/" rel="alternate" type="text/html"/>
	<title type="html">The ERCA joins the Stanford Computational Antitrust project</title>
	<summary type="html"><![CDATA[<p>The ECOWAS Regional Competition Authority (ERCA) has joined the Stanford Computational Antitrust pro...</p>]]></summary>
	<content type="html"><![CDATA[<p>The <a href="https://erca-arcc.org/" target="_blank" rel="noopener noreferrer">ECOWAS Regional Competition Authority (ERCA)</a> has joined the <a href="https://law.stanford.edu/codex-the-stanford-center-for-legal-informatics/projects/computational-antitrust/" rel="noopener noreferrer" target="_blank">Stanford Computational Antitrust project</a> today. We are delighted to welcome them.</p>
<p>The ERCA is the competition agency of the Economic Community of West African States. It covers twelve Member States across West Africa (Benin, Cabo Verde, C&ocirc;te d&rsquo;Ivoire, The Gambia, Ghana, Guinea, Guinea-Bissau, Liberia, Nigeria, Senegal, Sierra Leone, and Togo) and operates from The Gambia. It handles mergers and anticompetitive practices, and is now developing a regional framework on consumer protection. In short, it is building competition enforcement at regional scale. That is no small task.</p>
<p>The Stanford Computational Antitrust project now brings together over 70 competition agencies from around the world. The ambition is straightforward: to help agencies understand and use computational tools in competition analysis and enforcement. The network grows because the need is real. Digital markets do not wait for institutions to catch up.</p>
<p>The ERCA will contribute to our research reports and feed into the collective body of knowledge we are building. We will connect them with peer agencies in the network, some facing similar institutional contexts, others offering tools and experience ERCA can build on. We look forward to workshops and training sessions together. More than that, we look forward to learning from them. West Africa&rsquo;s telecommunications, digital platforms, and cross-border trade markets raise competition questions that computational tools are well-suited to address.</p>
<p>The project is stronger for this addition. Welcome.</p>]]></content>
	<updated>2026-03-05T16:14:26+00:00</updated>
	<author><name>Thibault Schrepel</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-03-05T16:14:26+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="computational antitrust"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-04:/281599</id>
	<link href="https://www.gautrais.com/blogue/2026/03/04/larret-whatsapp-ireland-ltd-c-comite-europeen-de-la-protection-des-donnees/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=larret-whatsapp-ireland-ltd-c-comite-europeen-de-la-protection-des-donnees" rel="alternate" type="text/html"/>
	<title type="html">L’arrêt WhatsApp Ireland Ltd. c. Comité européen de la  protection des données</title>
	<summary type="html"><![CDATA[<p>Aziz Fatnassi est &eacute;tudiant dans le cadre du cours DRT6929 (Vie priv&eacute;e + Num&eacute;rique) (Hiver 2026)&nbsp;
Apr...</p>]]></summary>
	<content type="html"><![CDATA[<p><strong><a href="https://www.gautrais.com/files/sites/185/2026/03/2025.03_AzizCarre-225x225-1.jpg" rel="noopener noreferrer" target="_blank"><img decoding="async" src="https://www.gautrais.com/files/sites/185/2026/03/2025.03_AzizCarre-225x225-1.jpg" alt="" referrerpolicy="no-referrer" loading="lazy"></a>Aziz Fatnassi est &eacute;tudiant dans le cadre du cours DRT6929 (Vie priv&eacute;e + Num&eacute;rique) (Hiver 2026)&nbsp;</strong></p>
<p>After five long years of litigation between WhatsApp Ireland Ltd (&ldquo;WhatsApp&rdquo;) and the European Data Protection Board (EDPB), arising from the 225 million euro fine imposed on the Meta subsidiary, the Court of Justice of the European Union (CJEU) held on 10 February 2026 that the company is entitled to bring an action before the EU courts against the EDPB&rsquo;s decision, even though that decision is addressed to the national data protection authorities, since it directly affects the company&rsquo;s activities, and consequently referred the case back to the General Court of the European Union to rule on the merits.</p>
<h4><strong>History of the origins, origin of the history</strong></h4>
<p>Shortly after the entry into force of the <a href="https://eur-lex.europa.eu/eli/reg/2016/679/oj?locale=fr" rel="noopener noreferrer" target="_blank">General Data Protection Regulation (GDPR)</a>, and in response to several complaints about the processing of personal data by the WhatsApp messaging application, the Irish regulator, which handles data protection for most of the American technology giants (their European headquarters being located in Ireland), <a href="https://www.dataprotection.ie/sites/default/files/uploads/2022-03/Full_decision_WhatsApp_Ireland-August_2021.pdf" rel="noopener noreferrer" target="_blank">opened a general inquiry on 10 December 2018 into the company&rsquo;s compliance with its transparency and information obligations toward individuals under Articles 12, 13 and 14 of the GDPR</a>.</p>
<p><a href="https://www.privacy-regulation.eu/fr/60.htm" rel="noopener noreferrer" target="_blank">Comme le veut la proc&eacute;dure</a>, d&egrave;s la cl&ocirc;ture de l&rsquo;enqu&ecirc;te men&eacute;e par l&rsquo;autorit&eacute; irlandaise, celle-ci a pr&eacute;sent&eacute; le 24 d&eacute;cembre 2020 &agrave; ses homologues des diff&eacute;rents &Eacute;tats membres concern&eacute;s par le traitement des donn&eacute;es personnelles de WhatsApp un projet de d&eacute;cision qui a soulev&eacute; plusieurs objections, notamment de la part des autorit&eacute;s allemande, fran&ccedil;aise, hongroise, n&eacute;erlandaise et portugaise concernant, entre autres, les atteintes identifi&eacute;es et le caract&egrave;re appropri&eacute; des mesures correctives envisag&eacute;es. S&rsquo;ensuivit une phase d&rsquo;&eacute;changes de commentaires entre les autorit&eacute;s concern&eacute;es, au terme de laquelle aucun consensus ne se d&eacute;gagea.</p>
<p><a href="https://www.edpb.europa.eu/news/news/2021/edpb-adopts-art-65-decision-regarding-whatsapp-ireland_fr?utm_source=chatgpt.com" rel="noopener noreferrer" target="_blank">L&rsquo;autorit&eacute; irlandaise n&rsquo;ayant retenu aucune des objections, saisit en cons&eacute;quence le CEPD conform&eacute;ment &agrave; la proc&eacute;dure instaur&eacute;e par le RGPD</a>, qui proc&egrave;de le 23 avril 2021 &agrave; l&rsquo;audition de WhatsApp et rend le 28 juillet 2021 une d&eacute;cision contraignante &agrave; l&rsquo;&eacute;gard de l&rsquo;ensemble des autorit&eacute;s concern&eacute;es dans laquelle il constate des violations du RGPD et ordonne &agrave; l&rsquo;autorit&eacute; irlandaise de modifier notamment le montant du projet d&rsquo;amende, qui s&rsquo;&eacute;l&egrave;ve d&eacute;sormais &agrave; 225 millions d&rsquo;euros.</p>
<p>Ce n&rsquo;est que le 2 septembre 2021 que l&rsquo;autorit&eacute; irlandaise, se conformant &agrave; la d&eacute;cision contraignante du CEPD, infligea &agrave; WhatsApp une amende de 225 millions d&rsquo;euros, l&rsquo;une des amendes les plus &eacute;lev&eacute;es jamais prononc&eacute;es au titre du RGPD, consid&eacute;rant que la filiale de Meta avait failli &agrave; ses obligations de transparence &agrave; la fois envers ses utilisateurs et les non-utilisateurs dont le num&eacute;ro de t&eacute;l&eacute;phone avait fait l&rsquo;objet d&rsquo;un traitement.</p>
<p>Dissatisfied, WhatsApp brought an action for annulment of the EDPB&rsquo;s decision before the General Court of the European Union, which, by order of 7 December 2022, declared it <a href="https://curia.europa.eu/site/upload/docs/application/pdf/2022-12/cp220196fr.pdf" rel="noopener noreferrer" target="_blank">inadmissible on the ground that the decision was not a challengeable act</a> and that WhatsApp was not directly concerned, holding that the decision was merely an intermediate measure and that only the final decision of the Irish authority could be contested, before the national courts, which alone could refer questions to the CJEU for a preliminary ruling. In other words, only the Irish authority&rsquo;s final decision was open to challenge.</p>
<p>WhatsApp therefore appealed to the CJEU, which ruled on 10 February 2026 in favour of admissibility, holding that</p>
<blockquote><p><em>&ldquo;</em><a href="https://eur-lex.europa.eu/legal-content/FR/TXT/HTML/?uri=CELEX:62023CJ0097_RES" rel="noopener noreferrer" target="_blank"><em>Hearing an appeal brought by WhatsApp Ireland Ltd (hereinafter &ldquo;WhatsApp&rdquo;), the Court, sitting as the Grand Chamber, sets aside the order of the General Court in WhatsApp Ireland v European Data Protection Board&hellip;</em></a><em>&rdquo;.</em></p></blockquote>
<p>It follows that decisions adopted by the EDPB on the basis of Article 65 of the GDPR are indeed challengeable acts.</p>
<h4><strong>The notion of a challengeable act within the meaning of the first paragraph of Article 263 TFEU</strong></h4>
<p>In its judgment of <strong>10 February 2026</strong>, the European high court <a href="https://eur-lex.europa.eu/legal-content/FR/TXT/HTML/?uri=CELEX:62023CJ0097_RES" rel="noopener noreferrer" target="_blank">clarifies the notion of a challengeable act</a>, recalling the role of the Union judicature in reviewing the legality of acts of Union bodies intended to produce legal effects vis-&agrave;-vis third parties. Whether an act is challengeable is assessed objectively, by reference to its content and irrespective of the identity of the applicant. It is therefore sufficient that the act produces legal effects vis-&agrave;-vis third parties, meaning any person distinct from its author, namely the EDPB.</p>
<p>In the present case, the Court held that the contested decision is an act emanating from a Union body that is binding on third parties, in that it binds both the Irish authority called upon to adopt it and the other supervisory authorities concerned.</p>
<blockquote><p><em>&ldquo;Thus, the Court finds that the contested decision constitutes a challengeable act and that the General Court erred in law, first, by conflating the requirements arising, respectively, from the first and fourth paragraphs of Article 263 TFEU and, second, by formulating an incorrect criterion relating to the absence of direct enforceability of the act in question against WhatsApp, and by characterising the contested decision as an intermediate measure devoid of autonomous legal effects.&rdquo;</em></p></blockquote>
<h4><strong>On the admissibility of WhatsApp&rsquo;s action against the EDPB&rsquo;s decision before the General Court</strong></h4>
<p>The Court recalls that, to be admissible, an action for annulment brought by a person who is not the addressee of an act requires that the act be of direct and individual concern to that person within the meaning of the fourth paragraph of Article 263 TFEU.</p>
<p>The high court adds that direct concern requires two cumulative conditions to be met: the contested measure must, first, directly affect the applicant&rsquo;s legal situation and, second, leave no discretion to the authorities responsible for implementing it.</p>
<p><strong>As regards the first condition</strong></p>
<blockquote><p><em>&ldquo;In the present case, the Court finds that, since the Board decided in particular that WhatsApp had infringed certain provisions of the GDPR, the contested decision alters that company&rsquo;s legal situation, the company being required in particular, as a result of the Board&rsquo;s intervention, to amend its contractual relationship with the users of the messaging service it provides. There is therefore a direct link between that decision and its effects on WhatsApp&rsquo;s situation.&rdquo;</em></p></blockquote>
<p><strong>As regards the second condition</strong></p>
<blockquote><p><em>&ldquo;In that context, the Court recalls that the contested decision binds the lead supervisory authority and the supervisory authorities concerned, which may not depart from the position adopted by the Board in that decision. That decision settles the questions of law referred to the Board and binds those authorities unconditionally, in particular as regards the finding of infringement of certain provisions of the GDPR, the classification of data subjected to lossy compression as personal data, and the obligation to increase the amount of the fines envisaged. Those authorities have no power to alter the outcome of the Board&rsquo;s assessments of those questions.&rdquo;</em></p></blockquote>
<p>The Court accordingly concludes that WhatsApp is directly concerned:</p>
<blockquote><p><em>&ldquo;Consequently, the Court concludes that WhatsApp is directly concerned by the contested decision.&rdquo;</em></p></blockquote>
<h4><strong>Scope of the judgment</strong></h4>
<p>By holding that companies may now be regarded as directly concerned by EDPB decisions even where those decisions are not formally addressed to them, and by recognising that they may bring actions for annulment against those decisions without first going through the national courts, the European high court sets a precedent opening the way to judicial review of the Board&rsquo;s acts, which could encourage further challenges, particularly from the major digital players.</p>
<p>Future litigation will sharpen its contours.</p>
<p>A case to watch!</p>]]></content>
	<updated>2026-03-04T16:37:04+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-03-04T16:37:04+00:00</updated>
		<title>Vincent Gautrais</title></source>

	<category term="cours"/>

	<category term="mes étudiant-e-s"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-04:/281600</id>
	<link href="https://www.gautrais.com/blogue/2026/03/04/vie-privee-et-data-brokers-le-cas-californien/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=vie-privee-et-data-brokers-le-cas-californien" rel="alternate" type="text/html"/>
	<title type="html">Vie privée et Data Brokers: le cas californien</title>
	<summary type="html"><![CDATA[<p>Aziz Fatnassi est &eacute;tudiant dans le cadre du cours DRT6929 (Vie priv&eacute;e + Num&eacute;rique) (Hiver 2026)
Intr...</p>]]></summary>
	<content type="html"><![CDATA[<p><strong><a href="https://www.gautrais.com/files/sites/185/2026/03/2025.03_AzizCarre-225x225-1.jpg" rel="noopener noreferrer" target="_blank"><img decoding="async" src="https://www.gautrais.com/files/sites/185/2026/03/2025.03_AzizCarre-225x225-1.jpg" alt="" referrerpolicy="no-referrer" loading="lazy"></a>Aziz Fatnassi est &eacute;tudiant dans le cadre du cours DRT6929 (Vie priv&eacute;e + Num&eacute;rique) (Hiver 2026)</strong></p>
<h4><b><i><span lang="FR">Introduction&nbsp;: la Californie, le laboratoire &eacute;tasunien en mati&egrave;re de protection des donn&eacute;es personnelles</span></i></b></h4>
<p><span lang="FR">Au sud de la baie de San Francisco, sur les comt&eacute;s de Santa Clara et de San Mateo en Californie, s&rsquo;&eacute;tend la Silicon Valley, &eacute;picentre mondial de l&rsquo;innovation technologique. Sur ce territoire relativement restreint d&rsquo;environ 200 km&sup2;, soit deux fois la surface de la ville de Paris, les grandes firmes industrielles historiques comme Hewlett-Packard, Intel et Apple c&ocirc;toient les g&eacute;ants de l&rsquo;&eacute;conomie num&eacute;rique, au premier rang desquels figurent Google, Meta, Yahoo, LinkedIn, Twitter, PayPal et Netflix.</span></p>
<p><span lang="FR">Berceau de la haute technologie et de l&rsquo;innovation mondiale, la Californie ne cesse pourtant de prendre les devants en mati&egrave;re de protection de donn&eacute;es personnelles depuis le </span><span lang="FR">scandale </span><span lang="FR"><a href="https://www.nexa.fr/blog/quest-ce-que-le-scandale-cambridge-analytica" rel="noopener noreferrer" target="_blank"><span>Facebook/Cambridge Analytica</span></a></span><span lang="FR">, un choix qui se traduit par l&rsquo;entr&eacute;e en vigueur au 1&#7497;&#691; janvier 2020 du </span><span lang="FR"><a href="https://cppa.ca.gov/regulations/pdf/ccpa_statute.pdf" rel="noopener noreferrer" target="_blank"><span>California Consumer Privacy Act</span></a></span><span lang="FR">, une version &eacute;tatsunienne du </span><span lang="FR"><a href="https://eur-lex.europa.eu/eli/reg/2016/679/oj?locale=fr" rel="noopener noreferrer" target="_blank"><span>R&egrave;glement europ&eacute;en de protection des donn&eacute;es (RGPD)</span></a></span><span lang="FR">. &nbsp;&nbsp;&nbsp;Premi&egrave;re du genre aux &Eacute;tats-Unis, elle impose aux g&eacute;ants du Net, dont le mod&egrave;le &eacute;conomique repose sur la collecte et l&rsquo;exploitation commerciale des donn&eacute;es personnelles de leurs utilisateurs, des obligations de transparence quant aux types de donn&eacute;es qu&rsquo;ils collectent, ainsi que l&rsquo;obligation de permettre aux personnes concern&eacute;es de s&rsquo;opposer &agrave; une telle exploitation.</span></p>
<p><span lang="FR">Dans la droite lign&eacute;e de son engagement en faveur d&rsquo;un renforcement du contr&ocirc;le des consommateurs californiens sur leurs donn&eacute;es personnelles, le l&eacute;gislateur a entendu leur conf&eacute;rer le droit de supprimer leurs donn&eacute;es collect&eacute;es par les plateformes num&eacute;riques, tout en en rationalisant le processus de mise en &oelig;uvre. Ce choix s&rsquo;est concr&eacute;tis&eacute; avec l&rsquo;adoption en 2023 du Senate Bill SB-362, un projet de loi connu sous le nom de </span><span lang="FR"><a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB362" rel="noopener noreferrer" target="_blank"><span>Delete Act</span></a></span><span lang="FR">, en vertu duquel la California Privacy Protection Agency</span> <span lang="FR">(ci-apr&egrave;s CPPA), l&rsquo;Agence californienne de protection de la vie priv&eacute;e, devrait mettre en place, au plus tard le 1&#7497;&#691; janvier 2026, un moyen permettant aux consommateurs de </span><span lang="FR"><a href="https://calmatters.digitaldemocracy.org/bills/ca_202320240sb362" rel="noopener noreferrer" target="_blank"><span>demander l&rsquo;effacement de leurs donn&eacute;es personnelles</span></a></span><span lang="FR">, en une seule demande.</span></p>
<p><span lang="FR">C&rsquo;est dans le prolongement de cette d&eacute;marche qu&rsquo;a &eacute;t&eacute; cr&eacute;&eacute;e la &laquo;&nbsp;</span><span lang="FR"><a href="https://privacy.ca.gov/drop/" rel="noopener noreferrer" target="_blank"><span>Delete Request and Opt-out Platform</span></a></span><span lang="FR">&nbsp;&raquo; (ci-apr&egrave;s DROP), une plateforme gratuite gr&acirc;ce &agrave; laquelle les consommateurs peuvent d&eacute;sormais, dans le cadre d&rsquo;une seule proc&eacute;dure, exiger la suppression de leurs donn&eacute;es personnelles aupr&egrave;s de plus de 500 courtiers en donn&eacute;es.</span></p>
<h4><b><i><span lang="FR">Sur les courtiers en donn&eacute;es personnelles souffle le vent du Delete Act&nbsp;</span></i></b></h4>
<p><span lang="FR">Les controverses r&eacute;centes li&eacute;es au recours par </span><span lang="FR"><a href="https://www.forbes.com/sites/rogerdooley/2025/07/17/will-delta-airlines-ai-pricing-trigger-a-customer-trust-crisis/" rel="noopener noreferrer" target="_blank"><span>certaines entreprises</span></a></span><span lang="FR"> aux </span><span lang="FR"><a href="https://bureau-concurrence.canada.ca/fr/comment-nous-favorisons-concurrence/education-sensibilisation/publications/tarification-algorithmique-concurrence-document-travail" rel="noopener noreferrer" target="_blank"><span>algorithmes de tarification dynamique</span></a></span><span lang="FR"> pour ajuster les prix, tout comme les pratiques d&rsquo;achat et d&rsquo;exploitation de donn&eacute;es personnelles des utilisateurs de </span><span lang="FR"><a href="https://themarkup.org/privacy/2022/01/27/gay-bi-dating-app-muslim-prayer-apps-sold-data-on-peoples-location-to-a-controversial-data-broker" rel="noopener noreferrer" target="_blank"><span>certaines applications</span></a></span><span lang="FR">, montrent l&rsquo;ampleur prise par le march&eacute; de courtage de donn&eacute;es et rappellent combien leur protection constitue d&eacute;sormais </span><span lang="FR"><a href="https://www.theguardian.com/us-news/2020/dec/03/aclu-seeks-release-records-data-us-collected-via-muslim-app-used-millions" rel="noopener noreferrer" target="_blank"><span>un enjeu central</span></a></span><span lang="FR">.</span></p>
<p><span lang="FR">Aux termes de l&rsquo;article 1798.99.80 du </span><span lang="FR"><a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB362" rel="noopener noreferrer" target="_blank"><span>Delete Act</span></a></span><span lang="FR">, qui reprend &agrave; l&rsquo;identique la d&eacute;finition du </span><span lang="FR"><a href="https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?lawCode=CIV&amp;division=3.&amp;title=1.81.48.&amp;part=4.&amp;chapter=&amp;article=" rel="noopener noreferrer" target="_blank"><span>California Civil Code</span></a></span><span lang="FR">, est un courtier en donn&eacute;es<i> &laquo;&nbsp;une entreprise qui collecte et vend sciemment &agrave; des tiers les informations personnelles d&rsquo;un consommateur avec lequel l&rsquo;entreprise n&rsquo;a pas de relation directe&nbsp;&raquo;. </i>Il en d&eacute;coule que la notion de courtiers en donn&eacute;es renvoie &agrave; une entreprise qui commercialise &agrave; des tiers des informations personnelles collect&eacute;es aupr&egrave;s d&rsquo;autres entreprises, concernant des personnes avec lesquelles elle n&rsquo;entretient aucun lien direct.<b></b></span></p>
<p><span lang="FR">Le </span><span lang="FR"><a href="https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?lawCode=CIV&amp;division=3.&amp;title=1.81.48.&amp;part=4.&amp;chapter=&amp;article=" rel="noopener noreferrer" target="_blank"><span>California Civil Code</span></a></span><span lang="FR"> pr&eacute;voyait d&eacute;j&agrave; l&rsquo;obligation, pour les courtiers en donn&eacute;es, de s&rsquo;inscrire aupr&egrave;s de la CPPA, selon les modalit&eacute;s pr&eacute;vues &agrave; l&rsquo;article 1798.99.82. L&rsquo;apport du </span><span lang="FR"><a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB362" rel="noopener noreferrer" target="_blank"><span>Delete Act</span></a></span><span lang="FR"> tient donc moins &agrave; l&rsquo;enregistrement en tant que tel qu&rsquo;&agrave; l&rsquo;obligation faite aux courtiers en donn&eacute;es de r&eacute;pondre aux demandes de suppression toutes les 45 jours, telle que pr&eacute;vue &agrave; l&rsquo;article 1798,99,86 (C)(1), sous peine de sanctions. Pour veiller au respect des nouvelles r&egrave;gles en place, le paragraphe (e)(1) du m&ecirc;me article introduit un m&eacute;canisme de surveillance assur&eacute;e par des tiers, dont la mise en &oelig;uvre interviendra en 2028 et sera renouvel&eacute;e tous les trois ans.</span></p>
<h4><strong><em>How the &ldquo;Delete Request and Opt-out Platform&rdquo; works</em></strong></h4>
<p>The DROP platform lets a consumer send, in a single submission, a request for deletion of their personal data to all registered brokers, together with, where applicable, a request to opt out of its future sale or sharing. Brokers are then required to check the platform regularly and to process the requests within the statutory deadlines.</p>
<p>Before submitting a request, the consumer must first register on the platform. To do so, they must attest to their California residency by providing certain identifying information, notably their name, date of birth and address. This data is then transmitted, in hashed form, to the data brokers concerned so that the request can be processed.</p>
<p><span lang="FR">Le consommateur peut, le cas &eacute;ch&eacute;ant, compl&eacute;ter son profil en ajoutant d&rsquo;autres informations compl&eacute;mentaires susceptibles de faciliter l&rsquo;identification de ses donn&eacute;es, telles qu&rsquo;un num&eacute;ro de t&eacute;l&eacute;phone, avant d&rsquo;adresser sa demande soit &agrave; l&rsquo;ensemble des courtiers inscrits, soit &agrave; une s&eacute;lection d&rsquo;entre eux, et d&rsquo;en suivre l&rsquo;&eacute;volution depuis son espace personnel.</span></p>
<p><span lang="FR">En offrant aux consommateurs un &laquo;&nbsp;guichet unique&nbsp;&raquo; qui leur permet de centraliser l&rsquo;exercice de leur droit d&rsquo;exiger la suppression de leurs donn&eacute;es personnelles aupr&egrave;s d&rsquo;une pluralit&eacute; de courtiers en donn&eacute;es souvent difficilement identifiables, la plateforme DROP r&eacute;pond &agrave; un enjeu d&rsquo;accessibilit&eacute; et de rationalisation des proc&eacute;dures.</span></p>
<h4><b><i><span lang="FR">Suppression, d&eacute;sindexation, effacement, oubli, une perspective de droit compar&eacute;&nbsp;</span></i></b></h4>
<p><span lang="FR">Au Qu&eacute;bec, le droit &agrave; la suppression des renseignements personnels est consacr&eacute; &agrave; </span><span lang="FR">l&rsquo;article 40 du&nbsp;</span><span lang="FR"><a href="https://www.legisquebec.gouv.qc.ca/fr/document/lc/ccq-1991?langCont=en#se:40" rel="noopener noreferrer" target="_blank"><span>Code civil du Qu&eacute;bec</span></a></span><span lang="FR">,&nbsp;au visa duquel &laquo;&nbsp;<i>Toute personne peut faire corriger, dans un dossier qui la concerne, des renseignements inexacts, incomplets ou &eacute;quivoques&nbsp;; elle peut aussi faire supprimer un renseignement p&eacute;rim&eacute; ou non justifi&eacute; par l&rsquo;objet du dossier, ou formuler par &eacute;crit des commentaires et les verser au dossier&hellip;&nbsp;&raquo;</i> et le droit &agrave; la d&eacute;sindexation &agrave; l&rsquo;article&nbsp;28.1 de la&nbsp;</span><span lang="FR"><a href="https://www.legisquebec.gouv.qc.ca/fr/document/lc/p-39.1" rel="noopener noreferrer" target="_blank"><span>Loi sur la protection des renseignements personnels dans le secteur priv&eacute;&nbsp;(LPRPSP)</span></a></span><span lang="FR">, dont le premier paragraphe dispose que <i>&laquo;&nbsp;<span>La personne concern&eacute;e par un renseignement personnel peut exiger d&rsquo;une personne qui exploite une entreprise qu&rsquo;elle cesse la diffusion de ce renseignement ou que soit d&eacute;sindex&eacute; tout hyperlien rattach&eacute; &agrave; son nom permettant d&rsquo;acc&eacute;der &agrave; ce renseignement par un moyen technologique, lorsque la diffusion de ce renseignement contrevient &agrave; la loi ou &agrave; une ordonnance judiciaire</span>&nbsp;&raquo;</i>. Ces deux droits ne recouvrent toutefois pas le m&ecirc;me objet, la suppression visant le renseignement lui-m&ecirc;me, tandis que la d&eacute;sindexation n&rsquo;affecte que sa visibilit&eacute; en ligne.</span></p>
<p><span lang="FR">Outre-Atlantique, le premier paragraphe de l&rsquo;</span><span lang="FR"><a href="https://gdpr-text.com/fr/read/article-17/" rel="noopener noreferrer" target="_blank"><span>article 17 du RGPD</span></a></span><span lang="FR"> au titre duquel la personne concern&eacute;e est en droit&nbsp;&laquo;&nbsp;<i>d&rsquo;obtenir du responsable du traitement l&rsquo;effacement, dans les meilleurs d&eacute;lais, de donn&eacute;es &agrave; caract&egrave;re personnel la concernant et le responsable du traitement a l&rsquo;obligation d&rsquo;effacer ces donn&eacute;es &agrave; caract&egrave;re personnel dans les meilleurs d&eacute;lais&nbsp;&raquo;.</i> Il en d&eacute;coule que la personne concern&eacute;e peut exiger l&rsquo;effacement de donn&eacute;es &agrave; caract&egrave;re personnel la concernant dans les cas pr&eacute;vus &agrave; l&rsquo;article 17, notamment &agrave; la suite de l&rsquo;exercice du droit d&rsquo;opposition, lorsque les donn&eacute;es ont fait l&rsquo;objet d&rsquo;un traitement illicite ou encore afin de respecter une obligation l&eacute;gale issue du droit de l&rsquo;Union ou du droit de l&rsquo;&Eacute;tat membre auquel le responsable du traitement est soumis.</span></p>
<p><span lang="FR">Ce droit &agrave; la suppression des donn&eacute;es est toutefois souvent confondu avec le droit &agrave; l&rsquo;oubli, alors que ces deux notions ne se situent pas au m&ecirc;me rang juridique.</span></p>
<p><span lang="FR">&Eacute;voqu&eacute; pour la premi&egrave;re fois en 1966 par </span><span lang="FR"><a href="https://docassas.u-paris2.fr/nuxeo/site/esupversions/ef25f216-071c-4460-ab75-6cdcc161a5a4?inline" rel="noopener noreferrer" target="_blank"><span>G&eacute;rard Lyon-Caen</span></a></span><span lang="FR"> dans un commentaire d&rsquo;un jugement du Tribunal de grande instance de la Seine, le droit &agrave; l&rsquo;oubli renvoyait alors &agrave; l&rsquo;id&eacute;e que certains faits, devenus anciens, ne devaient plus &ecirc;tre rappel&eacute;s en justice, une conception inspir&eacute;e de la prescription de l&rsquo;action publique, fond&eacute;e sur la logique selon laquelle, pass&eacute; un certain d&eacute;lai, il n&rsquo;est plus n&eacute;cessaire de rappeler en justice les crimes dont les effets ont disparu.</span></p>
<p><span lang="FR">Le droit &agrave; l&rsquo;oubli est en effet plus large et l&rsquo;effacement n&rsquo;en constitue qu&rsquo;une modalit&eacute;. En ce sens, l&rsquo;oubli peut &ecirc;tre assur&eacute; autrement que par la suppression des donn&eacute;es, notamment par le droit au d&eacute;r&eacute;f&eacute;rencement, d&eacute;fini par la Cour de justice de l&rsquo;Union europ&eacute;enne dans l&rsquo;arr&ecirc;t&nbsp;</span><span lang="FR"><a href="https://eur-lex.europa.eu/legal-content/FR/ALL/?uri=CELEX%3A62012CJ0131" rel="noopener noreferrer" target="_blank"><i><span>Google Spain</span></i><span>&nbsp;</span></a></span><span lang="FR">comme le droit &agrave; ce que l&rsquo;information &laquo;&nbsp;<i>relative &agrave; sa personne ne soit plus [&hellip;] li&eacute;e &agrave; son nom par une liste de r&eacute;sultats affich&eacute;e &agrave; la suite d&rsquo;une recherche effectu&eacute;e &agrave; partir de son nom [&hellip;] </i>&raquo;,&nbsp;dans une logique assez proche du droit &agrave; la d&eacute;sindexation pr&eacute;vu au Qu&eacute;bec.</span></p>
<p><span lang="FR">Le droit &agrave; la d&eacute;sindexation pourrait, &agrave; premi&egrave;re vue, &ecirc;tre assimil&eacute; au droit &agrave; l&rsquo;effacement. Il n&rsquo;en est pourtant rien, puisqu&rsquo;il ne vise pas &agrave; supprimer l&rsquo;information du site source, qui reste accessible par d&rsquo;autres crit&egrave;res de recherche, mais &agrave; emp&ecirc;cher qu&rsquo;elle apparaisse dans les r&eacute;sultats du moteur de recherche lorsqu&rsquo;une requ&ecirc;te est effectu&eacute;e au nom de la personne.</span></p>
<p><span lang="FR">Toutefois, faute d&rsquo;une mise en &oelig;uvre centralis&eacute;e aussi bien au Qu&eacute;bec qu&rsquo;en Europe, l&rsquo;exercice de ces droits par les personnes concern&eacute;es demeure fragment&eacute;.</span></p>
<h4><b><i><span lang="FR">Alea jacta est, une r&eacute;forme audacieuse aux effets limit&eacute;s&nbsp;</span></i></b></h4>
<p><span lang="FR">Malgr&eacute; des avanc&eacute;es notables, la r&eacute;ponse californienne aux enjeux du march&eacute; des services de courtage de donn&eacute;es n&rsquo;est toutefois pas exempte de limites et gagnerait &agrave; &ecirc;tre renforc&eacute;e, ce que les d&eacute;veloppements suivants entendent mettre en &eacute;vidence en s&rsquo;attachant plus particuli&egrave;rement au caract&egrave;re largement discr&eacute;tionnaire du dispositif et &agrave; sa port&eacute;e territoriale restreinte.</span></p>
<h4><b><i><span lang="FR">Une effectivit&eacute; laiss&eacute;e au bon vouloir des courtiers en donn&eacute;es&nbsp;</span></i></b></h4>
<p><span lang="FR">Le m&eacute;canisme repose sur l&rsquo;enregistrement des courtiers &agrave; la plateforme DROP, condition n&eacute;cessaire pour que les consommateurs puissent exercer leur droit &agrave; la suppression. Le </span><span lang="FR"><a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB362" rel="noopener noreferrer" target="_blank"><span>Delete Act</span></a></span><span lang="FR">&nbsp;pr&eacute;voit &agrave; cet effet une amende de 200 dollars par jour en cas de d&eacute;faut d&rsquo;enregistrement, en plus des co&ucirc;ts d&rsquo;enqu&ecirc;te engag&eacute;s par la CPPA. Une fois enregistr&eacute;s, les courtiers doivent en outre consulter la plateforme et traiter les demandes dans les d&eacute;lais l&eacute;gaux, sous peine d&rsquo;une amende identique en cas de non-suppression.</span></p>
<p><span lang="FR">Cependant, compte tenu du poids &eacute;conomique de l&rsquo;industrie des services de courtage de donn&eacute;es, ces sanctions nous semblent relativement limit&eacute;es, de sorte que l&rsquo;effectivit&eacute; du dispositif semble d&eacute;pendre, dans une large mesure, du bon vouloir des courtiers eux-m&ecirc;mes.</span></p>
<h4><b><i><span lang="FR">Un champ d&rsquo;application ratione loci limit&eacute;&nbsp;</span></i></b></h4>
<p><span lang="FR">Adopt&eacute; dans le cadre d&rsquo;une loi &eacute;tatique, et non f&eacute;d&eacute;rale, le dispositif DROP ne s&rsquo;impose qu&rsquo;aux courtiers soumis au droit californien, ce qui restreint consid&eacute;rablement son champ d&rsquo;application territorial alors m&ecirc;me que le march&eacute; des services de courtage de donn&eacute;es se d&eacute;ploie dans un environnement num&eacute;rique transnational.</span></p>
<p><span lang="FR">D&egrave;s lors, le dispositif californien issu du Delete Act ne constitue pas tant un aboutissement qu&rsquo;une &eacute;tape dans une r&eacute;flexion encore en construction sur les conditions d&rsquo;effectivit&eacute; du droit &agrave; l&rsquo;effacement. Reste &agrave; savoir si cette r&eacute;flexion trouvera demain un cadre juridique &agrave; la mesure des mutations qu&rsquo;elle r&eacute;v&egrave;le. Une telle r&eacute;flexion semble devoir se d&eacute;ployer dans un contexte plus large, marqu&eacute; par l&rsquo;essor de l&rsquo;intelligence artificielle g&eacute;n&eacute;rative. </span><span lang="FR"><a href="https://lexelectronica.openum.ca/files/sites/103/Lex_vol30no1_M_Aziz_Fatnassi.pdf" rel="noopener noreferrer" target="_blank"><span>Elle est g&eacute;n&eacute;rative dans la mesure o&ugrave; elle permet de &laquo;&nbsp;g&eacute;n&eacute;rer&nbsp;&raquo;, &agrave; partir de requ&ecirc;tes textuelles, des objets num&eacute;riques tels que des textes, des images, des sons, des vid&eacute;os, ou encore des fichiers. Ces productions suscitent tout autant l&rsquo;&eacute;merveillement</span></a></span> <span><span lang="FR">que les d&eacute;bats.</span></span></p>
<p><span lang="FR"><a href="https://www.lescpi.ca/articles/v30/n3/intelligence-artificielle-et-droit-dauteur-lhypothese-dun-domaine-public-par-defaut/" rel="noopener noreferrer" target="_blank"><span>Voil&agrave; une interrogation laiss&eacute;e &agrave; la doctrine du futur, sans trop savoir qui, entre l&rsquo;humain, l&rsquo;IA ou une hybridation des deux, la r&eacute;digera.</span></a></span></p>]]></content>
	<updated>2026-03-04T16:19:51+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-03-04T16:19:51+00:00</updated>
		<title>Vincent Gautrais</title></source>

	<category term="cours"/>

	<category term="mes étudiant-e-s"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-03-02:/281345</id>
	<link href="https://law.stanford.edu/2026/03/01/trust-without-teeth-the-eu-ai-act-healthcare-and-the-limits-of-a-voluntary-bill-of-rights/" rel="alternate" type="text/html"/>
	<title type="html">Trust Without Teeth: The EU AI Act, Healthcare, and the Limits of a Voluntary Bill of Rights</title>
	<summary type="html"><![CDATA[<p>In The European Union&rsquo;s Artificial Intelligence Act and Trust: Towards an AI Bill of Rights in...</p>]]></summary>
	<content type="html"><![CDATA[<p>In <em>The European Union&rsquo;s Artificial Intelligence Act and Trust: Towards an AI Bill of Rights in Healthcare?</em>, 17 Law, Innovation &amp; Tech. 318 (2025) Barry Solaiman notes his AI Bill of Rights proposal &ldquo;is intended to encourage debate.&rdquo; This analysis is offered in that light.</p>
<p>I review Solaiman&rsquo;s proposal through two prisms. The first is the AI Life Cycle Core Principles (AILCCP). Now in its third year of development, the AILCCP is a comprehensive framework containing 37 principles organized across 10 pillars, supported by 48 controls, and mapped to 10 life cycle phases. It offers a structured methodology for building, deploying, and operating AI systems that are defensible, compliant, and aligned with organizational, regulatory, and societal expectations. Each AILCCP principle, pillar, and life cycle phase is a formally defined concept enriched with objectives, rationale, key questions, controls, evidence requirements, and life cycle guidance, hence their capitalization as defined terms. Second, the Procedural Self-Consumption (PSC) lens. This is a legislative review framework that identifies seven diagnostic patterns through which technology legislation produces process rather than governance outcomes.</p>
<h4>I. Procedural Self-Consumption Analysis</h4>
<h5>The AI Act Through the PSC Lens</h5>
<p>Solaiman critiques the AI Act for inadequate trust-building but does not systematically examine whether the Act&rsquo;s operative provisions produce governance outcomes or merely generate process. Applying the PSC framework to the provisions that Solaiman discusses reveals patterns he identifies intuitively but does not name. The PSC framework contains seven diagnostic patterns. Four apply to the material Solaiman addresses.</p>
<p><strong>Pattern 1 (Procedural Self-Consumption): </strong>Article 95, one of only two articles in the AI Act that mention &ldquo;trust,&rdquo; creates an obligation to <em>encourage development of</em> voluntary codes. Not to adopt them. Not to enforce them. The provision generates further process (code development) rather than governance outcomes (binding standards). The AI Act builds trust by asking someone else to build trust later.</p>
<p><strong>Pattern 2 (Unanimity Without Convergence): </strong>Recital 27 invokes seven non-binding principles from the High-Level Expert Group on AI (AI HLEG) Guidelines, which form the basis of the Act&rsquo;s trust conception. &ldquo;Human agency and oversight&rdquo; and &ldquo;diversity, non-discrimination and fairness&rdquo; are not operative standards. They are consensus placeholders. The principles achieve unanimity precisely because they are undefined. Solaiman notes that this basis is &ldquo;not captured in the resulting law&rdquo; but does not frame the observation as a diagnostic pattern, which limits its explanatory force.</p>
<p><strong>Pattern 6 (Procedural Perfectionism): </strong>The stacked conformity assessment regime Solaiman describes, where Medical Device Regulation (MDR) conformity is followed by AI Act conformity, each with its own procedural prerequisites, illustrates how individually defensible steps accumulate into disabling sequences. Each step is justified. The sequence delays governance to the point of irrelevance for a technology whose innovation cycle outpaces the regulatory pipeline.</p>
<p><strong>Pattern 7 (Meta-Regulatory Irony): </strong>The AI Act creates notified bodies whose fee-for-service incentive structure, as Solaiman documents, may align them with the industry they audit rather than the public they protect. The trust-building mechanism reproduces the trust deficit it was designed to remedy.</p>
<h5>Solaiman&rsquo;s Own Proposal Through the PSC Lens</h5>
<p>Solaiman&rsquo;s proposed &ldquo;AI Bill of Rights for healthcare&rdquo; is itself vulnerable to the patterns he diagnoses.</p>
<p><strong>Pattern 1 again: </strong>Solaiman proposes a voluntary code within the AI Act&rsquo;s framework. A voluntary code creates no enforceable obligation. The test is straightforward: if every healthcare institution in the EU faithfully adopted Solaiman&rsquo;s Bill of Rights, what would change? The existence of a document, not the existence of accountability. Patients would possess a charter. They would not possess a cause of action.</p>
<p><strong>Pattern 2 again: </strong>The Bill of Rights would enshrine values for trust, including consent, medical liability, data accuracy, privacy, bias, security, efficacy, safety, and transparency. Solaiman does not define what compliance with these values looks like for any of them. The very unanimity problem he diagnoses in the AI Act, principles endorsed without operative content, reappears in his proposed remedy.</p>
<p><strong>Timeline Projection: </strong>Solaiman&rsquo;s proposal begins as a voluntary code. He envisions incorporation into national patient rights charters. In the EU&rsquo;s legislative architecture, this means: (1) the AI Act encourages codes (already enacted); (2) someone drafts a healthcare-specific Bill of Rights (unspecified timeline); (3) EU member states choose whether to incorporate it into national frameworks (optional, no deadline); (4) national implementation varies by healthcare system (unbounded). The minimum elapsed time from enactment to first enforceable patient-facing obligation is functionally indefinite.</p>
<h4>II. AILCCP Principle Mapping</h4>
<h5>Scoped Principle Set</h5>
<p>Given the paper&rsquo;s focus on healthcare AI, trust, the EU AI Act, and the doctor-patient relationship, the following AILCCP pillars and principles are contextually relevant:</p>
<p><strong>Transparency &amp; Explainability: </strong>Transparency, Explainability (XAI), Accessibility</p>
<p><strong>Oversight &amp; Accountability: </strong>Accountability, Governance, Metrics, Track Record</p>
<p><strong>Reliability &amp; Robustness: </strong>Accuracy, Trustworthy, Reliability</p>
<p><strong>Fairness &amp; Equity: </strong>Bias, Equity</p>
<p><strong>Privacy &amp; Consent: </strong>Consent, Privacy</p>
<p><strong>Safety &amp; Security: </strong>Safety, Security</p>
<p><strong>Ethics: </strong>Ethics, Fundamental Rights</p>
<p><strong>Human-Centered &amp; Workforce: </strong>Human-Centered</p>
<p><strong>Data &amp; Process: </strong>Data Stewardship</p>
<p>Excluded from scope: R&amp;D, Efficiency, Sustainable, Workforce Compatible, Wherewithal, Permit, Resilience, Robust. While organizational capability principles (Wherewithal, Sustainable) bear some relevance to healthcare institutions deploying AI, Solaiman&rsquo;s paper does not engage institutional capacity, making their inclusion forced.</p>
<h5>Principles Advanced</h5>
<p><strong>Transparency and Explainability (XAI): </strong>Solaiman&rsquo;s discussion of informed consent and the black box problem in Section 4.2 (pp. 331-332) engages these AILCCP principles directly. His call for &ldquo;meaningful disclosure of information that experts can access&rdquo; maps to the AILCCP requirement for audience-appropriate explanations. The treatment remains at the level of aspiration rather than specification. He acknowledges that explainability is &ldquo;easier said than done&rdquo; and that post hoc rationalizations may not illuminate inner workings, but he does not specify what &ldquo;meaningful disclosure&rdquo; operationally requires.</p>
<p><strong>Consent: </strong>The paper&rsquo;s treatment of informed consent as a trust-building mechanism in healthcare, drawing on Hall&rsquo;s &ldquo;predicated&rdquo; and &ldquo;supportive&rdquo; stances toward trust (p. 321), engages a recursive relationship the AI Act does not address. Consent is both <em>derived from</em> and <em>constitutive of</em> trust in healthcare settings. The AI Act treats consent as a downstream disclosure obligation. Solaiman&rsquo;s framing treats it as a structural precondition.</p>
<p><strong>Privacy and Data Stewardship: </strong>Solaiman identifies the deidentification/reidentification problem and notes that patients do not trust governance systems to maintain confidentiality (p. 332). His reference to the European Health Data Space (EHDS) as a potential solution engages Data Stewardship but does not assess whether the EHDS itself operationalizes the principle or merely invokes it.</p>
<p><strong>Bias: </strong>Solaiman&rsquo;s discussion of training data bias affecting demographic accuracy engages this principle, though his treatment is cursory. He calls for &ldquo;additional checks and verification&rdquo; (p. 332) without specifying what form those checks would take, who would perform them, or what standards would govern their execution.</p>
<p><strong>Accountability: </strong>The paper&rsquo;s discussion of liability gaps, the withdrawn Artificial Intelligence Liability Directive (AILD), and the uncertainty of the revised Product Liability Directive (PLD) engages Accountability (p. 332). Solaiman&rsquo;s proposal for a &ldquo;human point of reference&rdquo; and standing committees to examine AI incidents is more operationally concrete than the paper&rsquo;s other suggestions, though it still lacks enforcement architecture.</p>
<h5>Principles Engaged but Not Operationalized</h5>
<p><strong>Safety: </strong>Solaiman asserts in Section 4.2 that AI outputs should be &ldquo;accurate and safe for the context in which they are applied&rdquo; (p. 333). The AILCCP principle of Safety demands more than aspirational assertions. It requires specified testing protocols, defined performance thresholds, and continuous monitoring mechanisms. None appear here.</p>
<p><strong>Trustworthy: </strong>The entire article circles this principle without landing on it. The AILCCP principle requires that an AI system demonstrates, through verifiable evidence, that it warrants confidence. Solaiman&rsquo;s critique of the AI Act is that it treats risk classification as a proxy for Trustworthiness. His proposed Bill of Rights substitutes a different set of aspirational values without specifying how Trustworthiness would be verified. The proxy changes. The absence of verification does not.</p>
<h5>Principles Neglected</h5>
<p><strong>Metrics: </strong>The paper&rsquo;s most consequential omission. Trust is not simply a relational concept. It can be measured, benchmarked, and tracked. The AILCCP principle of Metrics requires defined indicators for system performance, compliance, and impact assessment. Solaiman&rsquo;s critique of the AI Act&rsquo;s trust framework would gain substantial force if he specified what trust metrics in healthcare AI would look like. Patient satisfaction scores with AI-assisted diagnoses. Error rate comparisons between AI-augmented and unaugmented clinical decisions. Disclosure compliance rates. Algorithmic audit frequency. Without Metrics, the Bill of Rights becomes a statement of aspiration indistinguishable from the ethics guidelines Solaiman dismisses (citing Munn) as &ldquo;useless.&rdquo;</p>
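<p>A minimal sketch, in Python, of what one of these indicators could look like in practice: an error-rate comparison between AI-augmented and unaugmented clinical decisions. The record structure and field names are hypothetical, not drawn from Solaiman or the AILCCP; the point is only that such a trust indicator can be computed, benchmarked, and tracked over time.</p>
<pre><code>from dataclasses import dataclass

@dataclass
class Decision:
    ai_assisted: bool  # was an AI system involved in the decision?
    erroneous: bool    # was the decision later judged erroneous on review?

def error_rate(decisions: list[Decision], ai_assisted: bool) -> float:
    group = [d for d in decisions if d.ai_assisted == ai_assisted]
    return sum(d.erroneous for d in group) / len(group) if group else float("nan")

def trust_delta(decisions: list[Decision]) -> float:
    """Positive value: AI-augmented decisions err less often than
    unaugmented ones. One measurable, trackable trust metric."""
    return error_rate(decisions, False) - error_rate(decisions, True)
</code></pre>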
<p><strong>Track Record: </strong>The AILCCP principle requires evaluation of an AI system&rsquo;s historical performance and the deploying organization&rsquo;s history of responsible AI practices. Solaiman&rsquo;s discussion of Conformit&eacute; Europ&eacute;enne (CE) marking&rsquo;s evidentiary weakness (safety evidence deferred to post-market) raises Track Record concerns implicitly but does not propose that a Bill of Rights require transparent performance histories accessible to patients or clinicians.</p>
<p><strong>Governance (as an AILCCP principle): </strong>Solaiman proposes institutional mechanisms (standing committees, responsible persons) but does not articulate the broader oversight architecture. Who oversees the standing committees? What is their reporting obligation? How is their independence secured? The Governance principle requires institutional controls that are themselves subject to review. Solaiman&rsquo;s proposal has no second-order oversight.</p>
<h5>Life Cycle Phase Coverage</h5>
<p>Solaiman&rsquo;s analysis clusters around two AILCCP phases: Pre-Deployment Review (conformity assessment) and Deployment &amp; Release (point-of-care rights). This is characteristic of a market-access regulatory framework. It leaves gaps in the phases where regulation, institutions, and patients intersect.</p>
<p><strong>Data Preparation</strong>.&nbsp;Solaiman raises training data bias as a trust concern in Section 4.2 but does not connect it to data pipeline oversight. Bias originates in this phase. His Bill of Rights proposes to address it at the point of care. By then it is baked in.</p>
<p><strong>Evaluation &amp; Red Teaming.</strong>&nbsp;The AI Act contemplates deployer-side testing obligations, and Solaiman&rsquo;s own ecosystem argument implies that evaluation cannot occur solely at the developer level. A healthcare institution deploying AI against a specific patient population has an independent obligation to test. The Bill of Rights does not address this.</p>
<p><strong>Operations &amp; Monitoring.</strong>&nbsp;Solaiman&rsquo;s standing committee proposal implicitly touches this phase, but continuous monitoring of system performance is not addressed. Trust erodes if a deployed system degrades and nobody is watching.</p>
<p><strong>Incident Response.</strong>&nbsp;Solaiman proposes a right to redress, which implies a triggering event, which implies a protocol. The Bill of Rights does not specify one.</p>
<p><strong>Decommissioning &amp; Archiving.</strong>&nbsp;Not addressed. What are the patient&rsquo;s rights regarding decisions made by a healthcare AI system that has been retired?</p>
<h5>The Socio-Technical Ecosystem Argument and Its Unfinished Work</h5>
<p>Section 4.1 argues, drawing on Unver et al., that trust must derive from the operation of the entire socio-technical ecosystem within which AI exists as one component. Trust in AI as a &ldquo;standalone device&rdquo; is, on this account, a category error: liability systems rest on the responsibility of the clinician or employer, not the tool. The meaningful locus of trust is the clinician, provider, or institution that stands behind the tool in a web of legal and ethical obligations.</p>
<p>The Bill of Rights does not carry this insight forward. It reproduces the very reductionism the article criticizes.</p>
<p>The rights in Section 4.2 are framed as claims against &ldquo;the AI&rdquo; or its immediate use, not as relational claims allocated across institutional actors in the ecosystem. There is no parallel articulation of duties for providers (deployment, monitoring, override, decommissioning), developers (updating, post-market surveillance, data stewardship), or regulators (transparency of notified bodies, management of misaligned incentives). Solaiman&rsquo;s argument establishes that trust is ecosystem-dependent, then proposes a remedy addressed to a single technology layer. The Bill of Rights silently inherits the AI Act&rsquo;s system-centric orientation, the same orientation Section 4.1 identifies as structurally inadequate.</p>
<p>The socio-technical framing also strengthens the PSC critique. If trust is ecosystem-level, then the AI Act&rsquo;s market-access conformity model, which evaluates a product at a single point in time before deployment, is even more inadequate than a product-level critique would suggest. A voluntary Bill of Rights layered on top of that model inherits its structural limitations. Section 4.1&rsquo;s own logic undermines Section 4.2&rsquo;s remedy.</p>
<p>Solaiman&rsquo;s socio-technical ecosystem argument maps naturally onto a life cycle view of AI. Trust is not established once at market entry and preserved automatically. It must be maintained through continuous oversight, incident response when things go wrong, and managed transitions when systems are updated or retired. If those phases are ungoverned, trust formed at the point of deployment will erode.</p>
<p>This means an ecosystem-aligned Bill of Rights cannot simply declare patient-facing values. It must allocate specific duties to specific actors. Hospitals, clinicians, developers, vendors, auditors, and regulators each play a role in sustaining trust across the AI life cycle. The Bill of Rights should specify what each owes, through mechanisms like performance reporting, independent oversight, and patient contestation rights.</p>
<p>Section 4.1 points toward this recognition. Section 4.2 does not deliver on it. The prescriptive proposal retreats to a static list of values addressed to a single technology layer, abandoning the ecosystem logic that Section 4.1 established.</p>
<h5>Standards Context</h5>
<p>Solaiman references standards from the International Organization for Standardization and International Electrotechnical Commission (ISO/IEC) in passing (through the AI HLEG Guidelines) but does not engage specific ones. ISO/IEC 42001:2023 (AI management systems) bears directly on the oversight architecture his Bill of Rights would require. ISO/IEC 23894:2023 (AI risk management) bears on his risk-versus-trust argument. The absence of standards engagement weakens the paper&rsquo;s prescriptive force.</p>
<h4>III. Argumentative Assessment</h4>
<h5>Thesis Architecture</h5>
<p>Solaiman&rsquo;s thesis is that trust in healthcare AI &ldquo;could be better fostered&rdquo; through a Bill of Rights. The conditional mood is telling. The thesis hedges where it should assert. A stronger formulation: the AI Act structurally cannot produce trust in healthcare because trust is a domain-specific, relational phenomenon and the Act is a horizontal, technical regulation. The Bill of Rights is not a &ldquo;could be&rdquo; improvement. It is a necessary supplement if the regulatory regime is to achieve its stated aims.</p>
<p>The thesis is also overbroad. It promises to examine what trust means in healthcare, how the AI Act incorporates trust, whether its provisions enhance trust, and what should be done to bridge the gap. That is four distinct arguments in a seventeen-page paper. Section 2 (the concept of trust in healthcare) receives approximately two and a half pages (pp. 320-322). Section 3 (trust and the AI Act) spans approximately six pages (pp. 322-327). Section 4.1 (systemic considerations) receives three pages (pp. 328-331), developing the socio-technical ecosystem argument that the paper then abandons. Section 4.2 (values for trust), the prescriptive core, receives approximately two and a half pages (pp. 331-333) to articulate the entire Bill of Rights proposal. The descriptive groundwork in Section 3 may be necessary. But Section 4.2 is underdeveloped relative to the weight the paper asks it to bear. Two and a half pages to specify a new governance instrument is not enough, especially when the paper&rsquo;s own Section 4.1 establishes that trust is ecosystem-dependent and therefore requires duties allocated across multiple institutional actors.</p>
<h5>Counterargument Treatment</h5>
<p>The paper&rsquo;s most significant argumentative weakness is its failure to engage the strongest version of the opposing position. The position that risk regulation <em>can</em> produce trust indirectly deserves serious treatment. The strongest version of this counterargument: systematically reducing the probability of harm, even through technocratic mechanisms, creates conditions under which trust emerges through experience. People trust automobiles not because they read safety regulation. They trust automobiles because safety regulation reduced harm rates over decades. Solaiman&rsquo;s implicit assumption that trust must be <em>directly</em> cultivated through rights-based mechanisms deserves interrogation. The paper does not provide it.</p>
<h5>The Realist Gap</h5>
<p>The paper does not examine what healthcare institutions would actually do with a voluntary Bill of Rights, and its own evidence makes this omission structurally damaging.</p>
<p>Solaiman cites Unver et al. for the proposition that trust requires &ldquo;designating responsibilities for different staff, such as doctors and clinicians, and outlining how the safeguards govern workflows concerning diagnosis and treatment using AI tools&rdquo; (p. 330). He then proposes a Bill of Rights that does not designate responsibilities for any specific staff, does not outline any workflow safeguards, and does not address the clinical settings in which the rights would be exercised.</p>
<p>Consider the following scenario. A mid-sized European hospital runs diagnostic imaging AI from one vendor, a clinical decision support system from another, and a patient monitoring platform from a third. Each system has different explainability characteristics, different data pipelines, different update cycles, and different contractual terms. Solaiman&rsquo;s Bill of Rights proposes that the patient receive &ldquo;meaningful disclosure of information that experts can access.&rdquo; Which expert? The radiologist who uses the imaging AI, the information technology (IT) administrator who manages the platform, or the procurement officer who negotiated the contract? The Bill of Rights does not say. A &ldquo;standing committee within the hospital setting convened to examine AI incidents&rdquo; must be resourced, staffed, and given authority. Who funds it? From which budget line? What is its jurisdiction when the AI vendor&rsquo;s terms of service disclaim liability for the output the committee is examining?</p>
<p>These questions are not rhetorical embellishments. They are the operational conditions under which the Bill of Rights would succeed or fail. Solaiman&rsquo;s Section 4.1 establishes that trust is ecosystem-dependent. His Section 4.2 proposes a remedy that ignores every institutional actor in the ecosystem except the patient.</p>
<p>The incentive structure is also unaddressed. EU hospitals operate under resource constraints, national regulatory variation, and competitive pressure. A voluntary code imposes compliance costs (committee formation, disclosure infrastructure, staff training) with no corresponding enforcement benefit. Hospitals that adopt the Bill of Rights bear costs. Hospitals that ignore it face no consequence. In this environment, voluntary adoption is not a governance strategy. It is a selection mechanism for institutions already inclined toward compliance.</p>
<h4>IV. Synthesis</h4>
<p>The word &ldquo;trust&rdquo; appears twice in 245 pages of the AI Act&rsquo;s articles, revealing that trust functions as a rhetorical frame for the regulation rather than an operative concept within it. The conflation of risk acceptability with trustworthiness, drawing on Laux, Wachter, and Mittelstadt, identifies a structural deficiency in the Act&rsquo;s conceptual architecture. CE marking&rsquo;s reliance on deferred evidence of safety and effectiveness compounds this deficiency in the healthcare domain.</p>
<p>The proposed remedy, however, reproduces the problem.</p>
<p>The Bill of Rights proposal reproduces the reductionism it diagnoses in the AI Act. It invokes values without operationalizing them. It endorses trust without specifying how to measure it. It proposes voluntary mechanisms when the argument&rsquo;s logic demands binding ones. It frames rights as claims against the AI system rather than as relational obligations distributed across the ecosystem that Section 4.1 identifies as the actual locus of trust. And it treats AI as a product entering a market rather than a system traversing a life cycle.</p>
<p>From an AILCCP perspective, the paper&rsquo;s most significant gap is Metrics. Without measurable indicators of trust, the Bill of Rights becomes precisely what Solaiman (citing Munn) accuses the AI HLEG Guidelines of being. From a PSC perspective, the proposal creates further process (charter development, voluntary adoption, national implementation) without a mechanism to ensure that process produces governance. The very trap the AI Act fell into.</p>
<p>The paper would benefit from engaging AILCCP&rsquo;s life cycle model to recognize that trust is not established at a single regulatory moment (market access) but must be maintained across the entire AI life cycle, from Data Preparation through Decommissioning. It would also benefit from specifying concrete Metrics for healthcare trust, anchoring the Bill of Rights to ISO/IEC standards that provide implementable controls, and confronting the enforcement question honestly.</p>
<p>A voluntary Bill of Rights for AI in healthcare, absent enforcement mechanisms and measurable standards, is a document that trusts the healthcare system to do what it has historically resisted doing without compulsion. That is not a governance strategy. That is a hope.</p>]]></content>
	<updated>2026-03-02T00:25:37+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-03-02T00:25:37+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="ai governance"/>

	<category term="ai healthcare"/>

	<category term="artificial intelligence"/>

	<category term="eran kahana"/>

	<category term="eu ai act"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-27:/281134</id>
	<link href="https://www.gautrais.com/conferences/6138/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=6138" rel="alternate" type="text/html"/>
	<title type="html">Journée Éthique et Droit, HEC centre-ville, salle A. 536 MNP, Montréal (en présentiel)(27 février 2026)</title>
	<summary type="html"><![CDATA[<p>La Journ&eacute;e &Eacute;thique et Droit est un &eacute;v&eacute;nement co-organis&eacute; par l&rsquo;axe Droit, cybers&eacute;curit&eacute; et cyb...</p>]]></summary>
	<content type="html"><![CDATA[<p>La Journ&eacute;e &Eacute;thique et Droit est un &eacute;v&eacute;nement co-organis&eacute; par l&rsquo;<a href="https://www.obvia.ca/recherche/axes/droit-cyberjustice-et-cybersecurite" rel="noopener noreferrer" target="_blank">axe Droit, cybers&eacute;curit&eacute; et cyberjustice</a>&nbsp;et l&rsquo;<a href="https://www.obvia.ca/recherche/axes/ethique-gouvernance-et-democratie" rel="noopener noreferrer" target="_blank">axe &Eacute;thique, gouvernance et d&eacute;mocratie</a>, en collaboration avec l&rsquo;ID&Eacute;A. Elle explore les tensions fondamentales qui &eacute;mergent &agrave; l&rsquo;intersection du droit et de l&rsquo;&eacute;thique face aux d&eacute;fis de l&rsquo;intelligence artificielle.</p>
<p>Cet &eacute;v&eacute;nement a pour objectif de stimuler les &eacute;changes interdisciplinaires en valorisant les points de tension comme sources d&rsquo;enrichissement mutuel. Le format privil&eacute;gie le dialogue structur&eacute; et interactif entre chercheurs, professeurs et doctorants soigneusement s&eacute;lectionn&eacute;s, permettant &agrave; chacun de comprendre comment le droit et l&rsquo;&eacute;thique peuvent se nourrir mutuellement malgr&eacute; leurs logiques distinctes. Cette journ&eacute;e r&eacute;unit des participants choisis du milieu acad&eacute;mique pour explorer les enjeux critiques qui fa&ccedil;onnent la gouvernance responsable de l&rsquo;intelligence artificielle.</p>
<p>&nbsp;</p>
<h2><strong>Structure de la journ&eacute;e</strong></h2>
<p>La journ&eacute;e s&rsquo;articulera autour de quatre panels d&rsquo;une heure, entrecoup&eacute;s d&rsquo;une pause d&eacute;jeuner:</p>
<ul>
<li><strong>9 h 30 &ndash; 10 h 30</strong>&nbsp;: Audit des syst&egrave;mes l&rsquo;IA, anim&eacute; par S&eacute;bastien Gambs</li>
<li><strong>11 h 00 &ndash; 12 h 00</strong>&nbsp;:&nbsp;L&rsquo;&eacute;thique de l&rsquo;IA&nbsp;: institutionnalisation et gouvernance, anim&eacute; par Hazar Haidar</li>
</ul>
<p><strong><em>Pause lunch</em></strong></p>
<ul>
<li><strong>13 h 00 &ndash; 14 h 00</strong>&nbsp;: Pluralit&eacute; normative et &eacute;thique juridique, anim&eacute; par Vincent Gautrais</li>
<li><strong>14 h 30 &ndash; 15 h 30</strong>&nbsp;: Tensions &eacute;thiques et droit, anim&eacute; par Emmanuelle Marceau</li>
</ul>
<p><strong><em>Mot de la fin</em></strong></p>]]></content>
	<updated>2026-02-27T16:21:55+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-02-27T16:21:55+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-27:/281067</id>
	<link href="https://www.gautrais.com/presse/les-zones-grises-de-lintelligence-artificielle-a-luniversite/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=les-zones-grises-de-lintelligence-artificielle-a-luniversite" rel="alternate" type="text/html"/>
	<title type="html">Les zones grises de l’intelligence artificielle à l’université (Affaires universitaires, 26 février 2026)</title>
	<summary type="html"><![CDATA[<p>&Agrave; l&rsquo;heure o&ugrave; l&rsquo;intelligence artificielle transforme les pratiques universitaires, les d&eacute;fis juridiq...</p>]]></summary>
	<content type="html"><![CDATA[<div>
<p>&Agrave; l&rsquo;heure o&ugrave; l&rsquo;intelligence artificielle transforme les pratiques universitaires, les d&eacute;fis juridiques et &eacute;thiques se multiplient. Prot&eacute;ger la vie priv&eacute;e, lutter contre les biais et encadrer l&rsquo;utilisation de ces outils deviennent des priorit&eacute;s.</p>
</div>
<div>
<div>
<p>L&rsquo;intelligence artificielle (IA) poursuit son irruption dans les universit&eacute;s. Elle s&rsquo;insinue dans les processus administratifs aussi bien que dans l&rsquo;enseignement et la recherche. Or, on &eacute;value encore mal les risques l&eacute;gaux qu&rsquo;elle engendre.</p>
<p>En f&eacute;vrier 2024, la Commissaire &agrave; l&rsquo;information et &agrave; la protection de la vie priv&eacute;e de l&rsquo;Ontario (CIPVP), Patricia&nbsp;Kosseim, a reproch&eacute; &agrave; l&rsquo;Universit&eacute; McMaster d&rsquo;avoir port&eacute; atteinte &agrave; la vie priv&eacute;e de personnes &eacute;tudiantes. La d&eacute;cision concerne l&rsquo;utilisation du logiciel&nbsp;Respondus&nbsp;pour surveiller des examens faits &agrave; distance. Le logiciel enregistre&nbsp;l&rsquo;image et le son des personnes &eacute;tudiantes pendant leur examen et emploie l&rsquo;IA pour identifier des indices de tricherie.</p>
<h4><strong><a href="https://www.affairesuniversitaires.ca/articles-de-fond-fr/les-zones-grises-de-lintelligence-artificielle-a-luniversite/" rel="noopener noreferrer" target="_blank">pour en savoir +</a></strong></h4>
<p>&nbsp;</p>
</div>
</div>]]></content>
	<updated>2026-02-27T02:48:36+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-02-27T02:48:36+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-26:/281066</id>
	<link href="https://www.gautrais.com/blogue/2026/02/26/france-travail-ecope-dune-sanction-administrative-de-5-millions-e-et-une-injonction-par-la-cnil/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=france-travail-ecope-dune-sanction-administrative-de-5-millions-e-et-une-injonction-par-la-cnil" rel="alternate" type="text/html"/>
	<title type="html">France Travail écope d’une sanction administrative de 5 millions (€) et une injonction par la CNIL</title>
	<summary type="html"><![CDATA[<p>LEAD Technologies Inc. V1.01
Rami Haddad est &eacute;tudiant dans le cadre du cours DRT6929 (Vie priv&eacute;e + N...</p>]]></summary>
	<content type="html"><![CDATA[<div><a href="https://www.gautrais.com/files/sites/185/2026/02/Rami-Haddad-scaled.jpg" rel="noopener noreferrer" target="_blank"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-6131" src="https://www.gautrais.com/files/sites/185/2026/02/Rami-Haddad-475x713.jpg" alt="" srcset="https://www.gautrais.com/files/sites/185/2026/02/Rami-Haddad-475x713.jpg 475w,https://www.gautrais.com/files/sites/185/2026/02/Rami-Haddad-975x1463.jpg 975w,https://www.gautrais.com/files/sites/185/2026/02/Rami-Haddad-768x1152.jpg 768w,https://www.gautrais.com/files/sites/185/2026/02/Rami-Haddad-1024x1536.jpg 1024w,https://www.gautrais.com/files/sites/185/2026/02/Rami-Haddad-1365x2048.jpg 1365w,https://www.gautrais.com/files/sites/185/2026/02/Rami-Haddad-725x1088.jpg 725w,https://www.gautrais.com/files/sites/185/2026/02/Rami-Haddad-scaled.jpg 1707w,https://www.gautrais.com/files/sites/185/2026/02/Rami-Haddad-475x713.jpg 475w,https://www.gautrais.com/files/sites/185/2026/02/Rami-Haddad-975x1463.jpg 975w,https://www.gautrais.com/files/sites/185/2026/02/Rami-Haddad-768x1152.jpg 768w,https://www.gautrais.com/files/sites/185/2026/02/Rami-Haddad-1024x1536.jpg 1024w,https://www.gautrais.com/files/sites/185/2026/02/Rami-Haddad-1365x2048.jpg 1365w,https://www.gautrais.com/files/sites/185/2026/02/Rami-Haddad-725x1088.jpg 725w,https://www.gautrais.com/files/sites/185/2026/02/Rami-Haddad-scaled.jpg 1707w" sizes="auto, (max-width: 161px) 100vw, 161px" referrerpolicy="no-referrer"></a><p>LEAD Technologies Inc. V1.01</p></div>
<p><strong>Rami Haddad is a student in the course DRT6929 (Vie privée + Numérique) (Winter 2026)</strong></p>
<p>FRANCE TRAVAIL was issued an <a href="https://www.cnil.fr/fr/violation-de-donnees-sanction-5millions-france-travail" rel="noopener noreferrer" target="_blank">administrative fine of 5 million (&euro;) and an injunction</a> by the Commission Nationale de l'Informatique et des Libertés ("<b>CNIL</b>") dated January 22, 2026, intended to ensure the effective implementation of the corrective measures arising from its breach of the obligation to secure the processing of the personal data it holds under the regulation governing the protection of personal data in the European Union (the General Data Protection Regulation, or "<b>GDPR</b>"). For clarity, the CNIL is the independent administrative authority created by statute in France to oversee the protection of personal data; it holds powers of investigation, audit and sanction in the event of violations of privacy laws and regulations.</p>
<h2><b>1. Context</b></h2>
<p>FRANCE TRAVAIL is a public administrative body under the supervision of the French Ministry of Labour, whose functions are defined by Article L. 5214-3-1 of the French Labour Code. These functions include, in particular, supporting jobseekers in their search for employment and managing job offers within the recruitment and hiring process. FRANCE TRAVAIL is also tasked with providing support adapted to the needs of persons recognized as disabled workers who benefit from the employment obligation. It also works closely with CAP EMPLOI, a specialized placement organization structured autonomously and independently of FRANCE TRAVAIL. CAP EMPLOI supports around 20% of the persons recognized as disabled workers who are registered with FRANCE TRAVAIL. To avoid fragmenting this support, a unified service offering between FRANCE TRAVAIL and CAP EMPLOI has, since 2018, allowed the support to be delivered within FRANCE TRAVAIL regardless of whether the referring adviser is an employee of one or the other entity. The 2,300 CAP EMPLOI employees could therefore remotely access FRANCE TRAVAIL's information system to support these jobseekers.</p>
<h2><b>2. The external intrusion and the exfiltration of personal data</b></h2>
<p>On February 29, 2024, activity described as abnormal was detected on the performance-monitoring system of FRANCE TRAVAIL's information system. However, it was only a few days later (March 5) that the alert was noticed by FRANCE TRAVAIL employees, leading to an internal investigation. That investigation established that an external intrusion had taken place through techniques known as <a href="https://www.consilium.europa.eu/fr/policies/cybersecurity-social-engineering/" rel="noopener noreferrer" target="_blank">"social engineering"</a>, carried out by external actors and extending from February 6 to March 5, 2024. The investigation also found that it was the accounts of CAP EMPLOI advisers that were hijacked through these social engineering techniques and then used to access FRANCE TRAVAIL's IT environment. The external actors gained access to several types of personal data, including data considered "<i>sensitive</i>" such as the national registry number (NIR) (the equivalent of Quebec's Social Insurance Number (SIN)), and managed to exfiltrate more than 25 gigabytes (GB) of personal data concerning more than 36 million people from FRANCE TRAVAIL's database. Following these findings, FRANCE TRAVAIL notified the CNIL of this massive exfiltration of personal data on March 8, 2024.</p>
<h2><b>3. Grounds for the <a href="https://www.legifrance.gouv.fr/cnil/id/CNILTEXT000053408671?query=loi+machines+%C3%A0+sous+en+ligne+2026&amp;searchField=ALL&amp;tab_selection=all" rel="noopener noreferrer" target="_blank">deliberation of the restricted committee no. SAN-2026-003 of January 22, 2026</a></b></h2>
<p>The appropriate technical and organizational measures that the controller must implement must take into account "[&hellip;] the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for the rights and freedoms of natural persons [&hellip;]" under <a href="https://www.cnil.fr/fr/reglement-europeen-protection-donnees/chapitre4#Article24:~:text=1%20%2D%20Obligations%20g%C3%A9n%C3%A9rales-,Article%2024%20%2D%20Responsabilit%C3%A9%20du%20responsable%20du%20traitement,-Compte%20tenu%20de" rel="noopener noreferrer" target="_blank">Article 24(1)</a> of the GDPR. <a href="https://www.cnil.fr/fr/reglement-europeen-protection-donnees/chapitre4#Article32:~:text=S%C3%A9curit%C3%A9%20du%20traitement-,Compte%20tenu%20de%20l%27%C3%A9tat%20des%20connaissances%2C%20des%20co%C3%BBts%20de%20mise%20en,des%20mesures%20techniques%20et%20organisationnelles%20pour%20assurer%20la%20s%C3%A9curit%C3%A9%20du%20traitement.,-Lors%20de%20l%27%C3%A9valuation" rel="noopener noreferrer" target="_blank">Article 32(1)</a> of the GDPR requires the controller to guarantee a level of security appropriate to the risk through the technical and organizational measures it puts in place. <a href="https://www.cnil.fr/fr/reglement-europeen-protection-donnees/chapitre4#Article32:~:text=Lors%20de%20l%27%C3%A9valuation,accidentelle%20ou%20illicite" rel="noopener noreferrer" target="_blank">Article 32(2)</a> of the GDPR sets out the criteria for assessing the appropriate level of security the controller must implement. The CNIL's investigation revealed that FRANCE TRAVAIL breached its obligations under Article 32(1) of the GDPR as the controller, within the meaning of Article 4(7) of the GDPR, responsible for the secure processing of the personal data it holds. Although FRANCE TRAVAIL argued that CAP EMPLOI bore its share of responsibility for applying the security rules, the CNIL determined that, on the facts, it was FRANCE TRAVAIL that had primary responsibility for implementing the measures needed to secure its information system, access to which was open to CAP EMPLOI for the support of jobseekers recognized as disabled workers.</p>
<h3><b><i>3.1 Regarding the security of data processing and the management of authorizations and access restrictions</i></b></h3>
<p>The CNIL's restricted committee held that the organizational security measures put in place by FRANCE TRAVAIL for data processing were not strong enough given the nature, scope, context and purposes of the processing of personal data (<a href="https://www.cnil.fr/fr/reglement-europeen-protection-donnees/chapitre2#Article9:~:text=l%27%C3%A9gard%20d%27un%20enfant.-,Article%209%20%2D%C2%A0Traitement%20portant%20sur%20des%20cat%C3%A9gories%20particuli%C3%A8res%20de%20donn%C3%A9es%20%C3%A0,g%C3%A9n%C3%A9tiques%2C%20des%20donn%C3%A9es%20biom%C3%A9triques%20ou%20des%20donn%C3%A9es%20concernant%20la%20sant%C3%A9.,-Article%2010%20%2D%C2%A0Traitement" rel="noopener noreferrer" target="_blank">Article 9 GDPR</a>) handled by FRANCE TRAVAIL, notably the NIR as well as the health data required by CAP EMPLOI in connection with jobseekers recognized as disabled workers, even though those health data were not affected by the breach. The CNIL thus points to the volume and the sensitive nature of the personal data entrusted to FRANCE TRAVAIL, which must be weighed when assessing the technical and organizational measures put in place for their processing.</p>
<p>Moreover, although the accounts of CAP EMPLOI advisers were configured by FRANCE TRAVAIL according to authorization profiles and the principle of least privilege, the CNIL's investigation revealed that these advisers could nonetheless access the personal data of every person in the database, and that this access was not limited on a need-to-know basis. On this point, the CNIL cites the principle of "<b>defence in depth applied to information systems</b>" set out in the <a href="https://messervices.cyber.gouv.fr/documents-guides/mementodep-v1-1.pdf" rel="noopener noreferrer" target="_blank">Mémento on the concept of defence in depth applied to information systems</a> drafted by the advisory office of the Direction centrale de la sécurité des systèmes d'information (DCSSI), version 1.1 of July 19, 2004, and since taken up by the Agence nationale de la sécurité des systèmes d'information ("<b>ANSSI</b>"):</p>
<blockquote>
<p><i>"Defence in depth therefore consists in opposing threats with coordinated and independent lines of defence. <b><u>On the technology side, this may mean, for example, that the compromise of one network service must not make it possible to obtain the highest privileges over the entire system</u></b>. In this context, giving administration rights to all users of a system is contrary to defence in depth. In terms of information protection, it may also mean that encryption at the application layer is not sufficient in itself and that it may be necessary to protect the IP layer as well. Defence in depth therefore means that security does not rest on a single element but on a coherent whole. <b><u>In theory, there must therefore be no single point on which the whole edifice rests</u></b> [&hellip;]"</i></p>
</blockquote>
<p>FRANCE TRAVAIL therefore breached its obligation under Article 32 of the GDPR by failing to ensure that the accounts of CAP EMPLOI advisers were adequately restricted, so that persons with no need to know the information could not access it unlawfully. Such security measures could have prevented the exfiltration of personal data through the unlawful use of CAP EMPLOI adviser accounts during the intrusion by the external actors.</p>
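<p>To make the access-control failure concrete, here is a minimal sketch (Python, with hypothetical names and data; it illustrates the need-to-know principle the CNIL invokes, not FRANCE TRAVAIL's actual system) in which each adviser account is scoped to an explicit caseload, so that one compromised account exposes only that caseload rather than the entire database:</p>
<pre><code># Illustrative need-to-know access check (hypothetical names and data).
# Each adviser account is scoped to an explicit caseload; requests for
# records outside that caseload are denied, so a single compromised
# account cannot be used to read the whole jobseeker database.

class AccessDenied(Exception):
    pass

# adviser id -> identifiers of the jobseekers that adviser actually follows
CASELOADS = {
    "adviser-cap-emploi-017": {"js-001", "js-002"},
}

RECORDS = {
    "js-001": {"nir": "..."},   # sensitive fields elided
    "js-002": {"nir": "..."},
    "js-999": {"nir": "..."},   # followed by some other adviser
}

def read_record(adviser_id, jobseeker_id):
    """Return a record only if it belongs to the adviser's caseload."""
    caseload = CASELOADS.get(adviser_id, set())
    if jobseeker_id not in caseload:
        raise AccessDenied(adviser_id + " has no need to know " + jobseeker_id)
    return RECORDS[jobseeker_id]

# read_record("adviser-cap-emploi-017", "js-001") succeeds;
# read_record("adviser-cap-emploi-017", "js-999") raises AccessDenied.
</code></pre>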
<h3><b><i>3.2 Regarding security measures: password quality, account restriction mechanisms and two-factor authentication</i></b></h3>
<p>In its <a href="https://kortlex.com/docs/deliberation-2022-100-du-21-juillet-2022_recommandation-aux-mots-de-passe.pdf" rel="noopener noreferrer" target="_blank">Deliberation no. 2022-100 of July 21, 2022</a>, the CNIL issued recommendations on appropriate security measures for passwords and account access restrictions, including: (i) password strength; (ii) a threshold of 10 unsuccessful login attempts within a recommended time frame; (iii) the use of a "<i>captcha</i>", a mechanism that blocks hostile automated, high-volume login attempts; and (iv) two-factor authentication or an electronic certificate. The CNIL notes that the password policy required by FRANCE TRAVAIL (a minimum of 8 characters, including at least 3 special characters) was reasonably robust. However, the account restriction mechanism locked an account only after 50 unsuccessful login attempts, and no other restriction mechanism was in place. The CNIL therefore found that the weakness of this mechanism, and the absence of any other mechanism restricting access to FRANCE TRAVAIL and CAP EMPLOI accounts, fell short of the security measures recommended in Deliberation no. 2022-100 cited above, amounting to a breach of the security obligation even though this vulnerability was not exploited by the external actors during the intrusion into FRANCE TRAVAIL's information systems.</p>
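<p>The gap between the two thresholds is easy to picture in code. A minimal lockout sketch (Python; the 10-attempt threshold follows the CNIL recommendation described above, while the 600-second window and all names are assumptions made for the example):</p>
<pre><code># Illustrative account-lockout check. The CNIL recommends locking after
# about 10 failed attempts within a time window; the sanctioned setup
# locked only after 50, with no other restriction mechanism.
import time
from collections import defaultdict

MAX_FAILURES = 10        # recommended order of magnitude (50 was in place)
WINDOW_SECONDS = 600     # assumed window for counting failures

_failures = defaultdict(list)   # account id -> timestamps of failed logins

def record_failed_login(account, now=None):
    """Record a failed login; return True if the account must be locked."""
    now = time.time() if now is None else now
    recent = [t for t in _failures[account] if WINDOW_SECONDS >= now - t]
    recent.append(now)
    _failures[account] = recent
    return len(recent) >= MAX_FAILURES
</code></pre>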
<h3><b><i>3.3 Regarding the implementation of an effective logging process</i></b></h3>
<p>In its <a href="https://www.legifrance.gouv.fr/jorf/id/JORFTEXT000044272396" rel="noopener noreferrer" target="_blank">Deliberation no. 2021-122 of October 14, 2021</a>, the CNIL recommends</p>
<blockquote>
<p><i>"[&hellip;] that operations creating, consulting, modifying and deleting personal data and the information contained in the processing to which logging is applied be recorded in an entry comprising the individually identified author, the timestamp, the nature of the operation performed and the reference of the data concerned by the operation [&hellip;]"</i> and the implementation of <i>"[&hellip;] a system for processing and analysing the collected data, and the formalization of a process for generating alerts and handling them where abnormal behaviour is suspected [&hellip;]"</i>.</p>
</blockquote>
<p>In addition, in its guide entitled "<a href="https://messervices.cyber.gouv.fr/documents-guides/anssi-guide-recommandations_securite_architecture_systeme_journalisation.pdf" rel="noopener noreferrer" target="_blank">Security recommendations for the architecture of a logging system</a>", ANSSI reiterates the importance of continuously analysing event logs to spot any suspicious or unusual activity, and of archiving the logs so that suspicions can be investigated after the fact. Logging is thus a vital technical measure for detecting, analysing and responding to security incidents. The CNIL observes that FRANCE TRAVAIL had a logging process in place (with internal technical identifiers and timestamps for the actions performed by advisers) when the external intrusion occurred, but that the failure to detect the suspicious activity and the absence of any triggered alert show that this process was not adequate to the risk FRANCE TRAVAIL faced. Moreover, the CNIL had already warned FRANCE TRAVAIL in the past (deliberation no. 2022-050 of April 21, 2022) of the importance of a sound logging system for preventing security incidents.</p>
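<p>A minimal sketch of such a logging process (Python; the four record fields follow the CNIL recommendation quoted above, while the alert rule, its threshold and all names are assumptions made for the example):</p>
<pre><code># Illustrative audit log with the fields the CNIL recommends (individually
# identified author, timestamp, nature of the operation, reference of the
# data concerned) plus a crude volume-based alert rule.
import time
from collections import Counter

AUDIT_LOG = []                 # in practice: append-only, archived storage
READ_ALERT_THRESHOLD = 500     # assumed "abnormal" hourly read volume

def log_operation(author, operation, data_ref, now=None):
    """Record one create/read/update/delete operation."""
    AUDIT_LOG.append({
        "author": author,                             # identified account
        "timestamp": time.time() if now is None else now,
        "operation": operation,                       # nature of operation
        "data_ref": data_ref,                         # data concerned
    })

def abnormal_readers(now):
    """Authors whose read volume over the past hour exceeds the threshold."""
    reads = Counter(
        e["author"]
        for e in AUDIT_LOG
        if e["operation"] == "read" and 3600 >= now - e["timestamp"]
    )
    return [author for author, n in reads.items() if n >= READ_ALERT_THRESHOLD]
</code></pre>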
<h2><b>4. The severity of sanctions according to the nature of the breach under the GDPR</b></h2>
<p>On January 13, 2026, the CNIL imposed an <a href="https://www.cnil.fr/en/sanction-free-2026" rel="noopener noreferrer" target="_blank">administrative fine</a> of &euro;42 million on the French companies FREE and FREE MOBILE for a breach of an obligation under Article 5 of the GDPR, specifically for having retained former customers' data beyond the necessary period. Here, the &euro;5 million administrative fine imposed on FRANCE TRAVAIL related to a breach of Article 32 of the GDPR concerning the <b>secure processing of personal data</b>. Under Article 83 of the GDPR, the severity of the monetary penalty varies with the nature of the breach: the ceiling is higher for a violation of the processing principles of Article 5 (Article 83(5) GDPR: 4% of worldwide annual turnover) than for a violation of the security obligation of Article 32 (Article 83(4) GDPR: 2% of turnover). Yet in both cases described above, the hacking of information systems, no doubt facilitated by various breaches of GDPR obligations, resulted in the massive exfiltration of data (including sensitive data) concerning millions of natural persons. The severity of the monetary penalty provided for by the GDPR is thus not particularly correlated with the harm that may flow from a breach: the GDPR scales its sanctions to the <b>nature of the breach</b> rather than to the <b>harm resulting from the breach invoked</b>.</p>
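<p>For reference, the Article 83 ceilings can be computed directly. A minimal sketch (Python) of the statutory maximums of Articles 83(4) and 83(5); an illustration only, since the amount actually imposed also turns on the Article 83(2) criteria (gravity, duration, mitigation and so on):</p>
<pre><code># Statutory fine ceilings under Article 83 GDPR (ceilings only; the fine
# actually imposed depends on the Article 83(2) assessment criteria).

def max_fine_eur(annual_turnover_eur, tier):
    if tier == "83(4)":   # e.g. Article 32 (security of processing)
        return max(10_000_000, 0.02 * annual_turnover_eur)
    if tier == "83(5)":   # e.g. Article 5 (processing principles)
        return max(20_000_000, 0.04 * annual_turnover_eur)
    raise ValueError(tier)

# For 1 billion euros of turnover: 83(4) caps at 20 m, 83(5) at 40 m.
print(max_fine_eur(1_000_000_000, "83(4)"))   # 20000000.0
print(max_fine_eur(1_000_000_000, "83(5)"))   # 40000000.0
</code></pre>]]></content>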
	<updated>2026-02-26T21:25:58+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-02-26T21:25:58+00:00</updated>
		<title>Vincent Gautrais</title></source>

	<category term="cours"/>

	<category term="mes étudiant-e-s"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-26:/281030</id>
	<link href="https://law.stanford.edu/2026/02/26/reproductive-due-process-procedural-justice-in-an-era-of-arbitrariness/" rel="alternate" type="text/html"/>
	<title type="html">Reproductive Due Process: Procedural Justice in an Era of Arbitrariness</title>
	<summary type="html"><![CDATA[<p>I. Introduction: The Invisible Constraint
The rapid evolution of reproductive technologies has force...</p>]]></summary>
	<content type="html"><![CDATA[<h3><strong>I. Introduction: The Invisible Constraint</strong></h3>
<p>The rapid evolution of reproductive technologies has forced courts and legislatures to confront questions that once seemed purely theoretical. In vitro fertilization (IVF) has become routine medical practice, while emerging techniques such as in vitro gametogenesis (IVG)&mdash;an experimental method of generating eggs or sperm from ordinary somatic cells such as skin cells&mdash;suggest that future reproduction may involve the large-scale creation and management of embryos.[1] Although such developments remain scientifically and politically contingent, advances in stem-cell-derived gametes and genomic sequencing could expand the scale of embryo creation and selection, thereby intensifying the regulatory and legal stakes of reproductive governance. Even at a preclinical stage, IVG suggests that reproduction may increasingly unfold within laboratory-based and institutionally structured frameworks.[2]</p>
<p>Since <em>Dobbs v. Jackson Women&rsquo;s Health Organization</em>,[3] legal debate has largely focused on whether reproductive autonomy remains protected as a substantive constitutional right.[4] This blog asks a different question: when the state restructures reproductive governance&mdash;by redefining embryos, prohibiting their disposition, or altering regulatory frameworks&mdash;what procedural obligations follow?</p>
<p>Substantive due process asks whether certain forms of state regulation impermissibly infringe constitutionally protected liberty interests; procedural due process concerns how the state must regulate when it does so.[5] Even where no fundamental right is recognized, the Constitution requires fair procedures when the state deprives individuals of recognized liberty or property interests.[6] Research on procedural justice further suggests that the legitimacy of the state depends not only on outcomes, but on the fairness and predictability of decision-making processes.[7] In the assisted reproductive technologies context, the erosion of procedural safeguards risks replacing constitutional order with government arbitrariness.</p>
<h3><strong>II. The Vanishing Question of Procedure</strong></h3>
<p>Since <em>Dobbs v. Jackson Women&rsquo;s Health Organization</em> repudiated the constitutional right to abortion, debates over reproduction have largely been framed in binary terms: either reproductive autonomy is a fundamental right, or it is not.[8] What has followed is not a careful recalibration of how states regulate reproduction, but a proliferation of blunt legal interventions. Recent reporting by organizations such as the Guttmacher Institute indicates that multiple states have enacted or proposed legislation recognizing forms of fetal or embryonic personhood, often with significant implications for assisted reproductive technologies.[9]</p>
<p>Notably absent from these developments is sustained attention to procedural due process, the requirement that when the government deprives individuals of liberty or property, it must do so through fair and predictable procedures.[10] A skeptic might argue that if no substantive right exists, no procedural protection is triggered. This reasoning overlooks a central feature of due process doctrine. Procedural due process does not protect &ldquo;fundamental rights&rdquo; alone; it protects any &ldquo;liberty&rdquo; or &ldquo;property&rdquo; interest created by existing law, including interests shaped by state-created regulatory frameworks, contractual arrangements, or settled practices.[11] As the Supreme Court explained in <em>Perry v. Sindermann</em>, constitutionally protected property interests, for instance, may arise from &ldquo;mutually explicit understandings,&rdquo; even where no formal entitlement is guaranteed by statute.[12]</p>
<p>This blog does not contend that participation in assisted reproduction creates a freestanding constitutional right. Rather, it argues that once the state affirmatively structures, licenses, and regulates a reproductive medical framework, it assumes procedural obligations when altering that framework in ways that arbitrarily infringe settled reliance interests. Patients who have invested genetic material, significant financial resources, and years of medical reliance on IVF may have reliance interests analogous to the mutually explicit understandings recognized in procedural due process doctrine.</p>
<h3><strong>III. The Alabama Example</strong></h3>
<p>In most areas of law, decisions with profound personal consequences&mdash;termination of parental rights, involuntary commitment, or denial of public benefits&mdash;trigger procedural safeguards designed to ensure fairness and predictability.[13] The regulation of assisted reproductive technologies, however, increasingly operates outside this framework.</p>
<p>Critics may respond that general legislation does not require individualized hearings.[14] Under cases such as <em>Bi-Metallic Investment Co. v. State Board of Equalization</em>, 239 U.S. 441 (1915), when a rule applies to the public, due process does not mandate case-by-case adjudication.[15] That principle is well established. But the concern here is not the absence of individualized hearings; it is the absence of legitimate, transitional governance when legal rules are abruptly redefined in ways that disrupt settled reliance interests.</p>
<p>When a legislature or court suddenly reclassifies embryos, the deprivation of genetic material and decisional authority may follow automatically, without notice, safe-harbor periods, or prospective application. Procedural justice research suggests that individuals are more likely to accept regulatory outcomes when decision-making processes are perceived as fair, transparent, and respectful.[16] Abrupt legal shifts without transitional mechanisms undermine not only expectations but institutional legitimacy.</p>
<p>The Alabama Supreme Court&rsquo;s 2024 decision in <em>LePage v. Center for Reproductive Medicine </em>exemplifies this dynamic.[17] The court&rsquo;s interpretation of the state&rsquo;s wrongful death statute effectively reclassified embryos in a manner that significantly altered the legal framework within which IVF patients had structured their reproductive decisions, without providing a procedural mechanism to mitigate the consequences of that shift.</p>
<p>No notice period or prospective application was offered.[18] Had transitional safeguards been provided&mdash;such as prospective limitation of liability, grandfathering provisions, or time to transfer embryos&mdash;the regulatory change might have avoided the appearance of judicial arbitrariness. Instead, deprivation operated immediately and categorically.</p>
<p>As articulated in <em>Mathews v. Eldridge</em>, 424 U.S. 319 (1976)&mdash;a case involving the termination of Social Security disability benefits&mdash;due process analysis evaluates the private interest at stake, the risk of erroneous deprivation, and the probable value of additional procedural safeguards.[19] Although <em>Mathews</em> arose in an administrative benefits context rather than general legislation, its framework highlights a broader constitutional concern: when legal change creates a significant risk of unjustified deprivation of structured private interests, the availability of procedural mitigation mechanisms becomes normatively and institutionally consequential.</p>
<h3><strong>IV. Why IVG Raises the Stakes</strong></h3>
<p>Emerging technologies such as IVG will only amplify these procedural deficits. IVG would enable the creation of large numbers of embryos from somatic cells, increasing the frequency and complexity of decisions about storage, testing, and disposition. In some cases, abrupt legal reclassification could leave patients without access to what may be their only medically feasible pathway to genetically related parenthood or could render years of reproductive planning legally precarious. As reproduction becomes more regulated through licensing regimes, statutory definitions, insurance mandates, hospital oversight, and potential embryo registries, the consequences of abrupt legal reclassification by courts or legislatures grow more significant.</p>
<p>As reproductive technology governance becomes more structurally regulated, its insulation from individualized procedural protection becomes more consequential. When complex regulatory systems operate without mechanisms to manage reliance or provide transitional safeguards, the risk of systemic arbitrariness increases&mdash;not because of case-specific misjudgment, but because legal change leaves no room for mitigation.</p>
<p>Without mechanisms to surface contradictions in policy&mdash;such as promoting childbirth while chilling the technologies that enable it&mdash;the law risks becoming not only restrictive but internally incoherent. IVG does not simply expand scientific possibilities; it intensifies the need for governance structures that account for reliance, predictability, and the cumulative effects of regulatory change.</p>
<h3><strong>V. Conclusion: </strong><strong>Procedural Legitimacy and Reproductive Governance</strong></h3>
<p>Reframing the regulation of reproductive technologies as a procedural due process problem does not require resurrecting <em>Roe</em>. It requires only recognition that even where the state may regulate reproduction, the legitimacy of that regulation depends on how change is implemented. Even where regulatory authority shifts from administrative agencies to legislatures or courts, the underlying demand for procedural fairness does not disappear; it becomes more difficult to enforce and more consequential when absent. Procedural safeguards demand transparency, neutrality, and meaningful opportunities for affected parties to anticipate and respond to regulatory shifts&mdash;features that procedural justice research identifies as central to legal legitimacy. They obligate states to justify not only what they regulate, but how regulatory change is structured and whom it disrupts.</p>
<p>When these safeguards disappear, reproductive governance risks sliding from constitutional order toward arbitrary power. The erosion of procedural norms normalizes a vision of reproduction as an administrative permission rather than a domain structured by reasoned and predictable legal processes. Even where substantive reproductive rights are contested, the durability and legitimacy of reproductive regulation depend on procedures that respect reliance, mitigate abrupt disruption, and constrain arbitrariness.</p>
<p>Reproductive technology governance without procedural fairness does not simply narrow autonomy; it undermines the constitutional commitment to law as a system of reasoned and accountable decision-making.</p>
<h3><strong>References</strong></h3>
<p>[1] See Nat&rsquo;l Acads. of Scis., Eng&rsquo;g &amp; Med., <em>Heritable Human Genome Editing</em> 89&ndash;92 (2020).</p>
<p>[2] See Henry T. Greely, <em>The End of Sex and the Future of Human Reproduction</em> 1&ndash;20, 120&ndash;50 (Harvard Univ. Press 2016).</p>
<p>[3]<em> Dobbs v. Jackson Women&rsquo;s Health Org.</em>, 597 U.S. 215 (2022).</p>
<p>[4] See, e.g., Elizabeth Price Foley, <em>Dobbs and the Future of Substantive Liberty</em>, 64 Santa Clara L. Rev. 159 (2024).</p>
<p>[5] See, e.g., Erwin Chemerinsky, <em>Constitutional Law: Principles and Policies</em> &sect; 10.1 (6th ed. 2019).</p>
<p>[6] <em>Board of Regents v. Roth</em>, 408 U.S. 564, 569&ndash;70 (1972); <em>Mathews v. Eldridge</em>, 424 U.S. 319, 332&ndash;35 (1976).</p>
<p>[7] Tom R. Tyler, <em>Why People Obey the Law</em> (1990).</p>
<p>[8] <em>Dobbs v. Jackson Women&rsquo;s Health Org.,</em> 597 U.S. 215 (2022); see, e.g., Elizabeth Price Foley, <em>Dobbs and the Future of Substantive Liberty</em>, 64 Santa Clara L. Rev. 159, 181 (2024).</p>
<p>[9] See Guttmacher Inst., <em>State Policy Trends 2025: Full-Year Analysis</em> (Feb. 4, 2026), <a href="https://www.guttmacher.org/2025/12/state-policy-trends-2025-full-year-analysis" rel="noopener noreferrer" target="_blank">https://www.guttmacher.org/2025/12/state-policy-trends-2025-full-year-analysis</a>; Guttmacher Inst., <em>State Policy Trends: Midyear Analysis</em> (June 16, 2025), <a href="https://www.guttmacher.org/2025/06/state-policy-trends-midyear-analysis" rel="noopener noreferrer" target="_blank">https://www.guttmacher.org/2025/06/state-policy-trends-midyear-analysis</a>; Guttmacher Inst., <em>First Quarter 2024 State Policy Trends</em> (May 8, 2024), <a href="https://www.guttmacher.org/2024/05/first-quarter-2024-state-policy-trends" rel="noopener noreferrer" target="_blank">https://www.guttmacher.org/2024/05/first-quarter-2024-state-policy-trends</a>.</p>
<p>[10] U.S. Const. amend. XIV, &sect; 1; <em>Mathews v. Eldridge</em>, 424 U.S. 319, 332&ndash;35 (1976).</p>
<p>[11] See <em>Board of Regents v. Roth</em>, 408 U.S. 564, 569&ndash;70 (1972); <em>Perry v. Sindermann</em>, 408 U.S. 593, 601&ndash;02 (1972).</p>
<p>[12] See <em>Perry v. Sindermann</em>, 408 U.S. 593, 601 (1972).</p>
<p>[13] <em>Santosky v. Kramer</em>, 455 U.S. 745, 753&ndash;54 (1982); <em>Goldberg v. Kelly</em>, 397 U.S. 254, 267&ndash;71 (1970).</p>
<p>[14] <em>Bi-Metallic Inv. Co. v. State Bd. of Equalization</em>, 239 U.S. 441 (1915).</p>
<p>[15] See <em>Bi-Metallic Inv. Co. v. State Bd. of Equalization</em>, 239 U.S. 441, 445 (1915).</p>
<p>[16] Tom R. Tyler, <em>What Is Procedural Justice?</em>, 22 Law &amp; Soc&rsquo;y Rev. 103 (1988).</p>
<p>[17] <em>LePage v. Ctr. for Reprod. Med., P.C.</em>, 2024 WL 1161240 (Ala. Feb. 16, 2024).</p>
<p>[18] <em>LePage v. Ctr. for Reprod. Med., P.C.</em>, No. SC-2022-0579 (Ala. Feb. 16, 2024).</p>
<p>[19] <em>Mathews v. Eldridge</em>, 424 U.S. 319, 335 (1976).</p>]]></content>
	<updated>2026-02-26T15:55:38+00:00</updated>
	<author><name>Bo Hyoung Lee</name></author>
	<source>
		<id>https://law.stanford.edu/blog/lawandbiosciences/</id>
		<link rel="self" href="https://law.stanford.edu/blog/lawandbiosciences/"/>
		<updated>2026-02-26T15:55:38+00:00</updated>
		<title>Law and Biosciences Blog - Stanford Law School</title></source>

	<category term="assisted reproductive technology"/>

	<category term="constitutional law"/>

	<category term="in vitro gametogenesis"/>

	<category term="legal legitimacy"/>

	<category term="medicine"/>

	<category term="procedural due process"/>

	<category term="procedural justice"/>

	<category term="reproductive governance"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-24:/280815</id>
	<link href="https://law.stanford.edu/2026/02/19/charley-moore-axon-fusus-lightpost-codex-group-meeting-february-19-2026/" rel="alternate" type="text/html"/>
	<title type="html">Charley Moore – Axon Fusus &amp; Lightpost – CodeX Group Meeting – February 19, 2026</title>
	<summary type="html"><![CDATA[<p>Charley Moore, Product Manager at Axon, presented on the company&rsquo;s real time crime center tech...</p>]]></summary>
	<content type="html"><![CDATA[<p>Charley Moore, Product Manager at Axon, presented on the company&rsquo;s real time crime center technology Axon Fusus and Axon Lightpost, an automatic license plate reading (ALPR) camera.</p>
<p>Fusus is an open-platform system that integrates existing cameras, body worn cameras, drones, and other security assets into a single common operating picture, allowing law enforcement to act on real time data and AI-driven analytics.</p>
<p>Lightpost draws power from streetlights to capture vehicle and license plate information and, when combined with Fusus, enables officers to track suspect vehicles, build hotlists, and resolve incidents rapidly &mdash; as demonstrated by a beta program hit-and-run case resolved in 20 minutes that recovered $80,000 in illegal drugs.</p>
<p>The Q&amp;A covered topics including privacy compliance (governed by MOUs and local retention laws), the donor program that provides free Fusus cores to local businesses, the absence of facial recognition in the platform, and a subscription/hardware-based business model tailored to each agency&rsquo;s existing infrastructure.</p>
<figure aria-describedby="caption-attachment-559235"><img fetchpriority="high" decoding="async" src="https://law.stanford.edu/wp-content/uploads/2026/02/charley-moore-axon-fusus-lightpost-codex-group-meeting-february-19-2026.jpg" alt="Charley Moore - Axon Fusus &amp; Lightpost - CodeX Group Meeting - February 19, 2026" loading="lazy"><figcaption>Axon Fusus</figcaption></figure>
<p><a href="https://youtu.be/7epfQkVcKRw" rel="noopener noreferrer" target="_blank">Watch Charley Moore&rsquo;s presentation at the CodeX Group Meeting</a></p>
<p><b>Transcript</b></p>
<p><b>Roland Vogl:</b><span> Today we have Charley Moore, who&rsquo;s the product manager at Axon and is in charge of Axon Lightpost. Axon is a leading company in the real time crime center space. That&rsquo;s an area and technology space we haven&rsquo;t covered much before. So we&rsquo;re really curious for you, Charley, to educate us a little bit about that space and tell us specifically what you&rsquo;re doing there, and maybe show us a little bit of the technology.</span></p>
<p><span>There are no other updates, other than that FutureLaw is coming up in two months. So if you haven&rsquo;t registered yet, please do so at codexfuturelaw.com. And with that, I will now turn it over to Charley.</span></p>
<p><b>Charley Moore:</b> <span>My name is Charley Moore. As Dr. Vogl said, I am the lead product manager for Axon Lightpost, which is a new ALPR &mdash; automatic license plate reading &mdash; solution that we launched in December. I had the privilege of meeting Dr. Vogl when I was actually representing Axon at the career fair back around September, and he invited me to present here. I could not be more grateful for the opportunity.</span></p>
<p><span>So I&rsquo;m looking forward to talking about Axon Fusus, which is where the majority of my work at Axon has revolved around, and then giving a little background about Axon itself.</span></p>
<p><span>Axon was founded as TASER International. The first product that came out was the TASER, then cameras, then body worn cameras. And now the majority of my work has been in the real time crime center space. That&rsquo;s what I&rsquo;m going to talk about mostly for this presentation.</span></p>
<p><span>I started at Axon as a mid-market enterprise executive, as an SDR on that team, working in security operations centers &mdash; selling the same technology that we use for real time crime centers to enterprise customers and private businesses, allowing them to share video footage and incidents directly with law enforcement, and using a lot of the same technology that law enforcement uses for their own operations.</span></p>
<p><span>From there, I became a product manager on the Axon Vehicle Intelligence team, helping develop our strategy around vehicle information for crime prevention. The two cameras we came out with are Axon Outpost and Axon Lightpost. Axon Outpost is a solar powered ALPR camera, and Axon Lightpost &mdash; which you can see here &mdash; is a streetlight-powered ALPR solution. It draws power from the street light, which allows you to power the camera and provide connectivity for high-speed, high-distance, very accurate vehicle capture and license plate information for crime prevention.</span></p>
<h3><b>What Is a Real Time Crime Center?</b></h3>
<p><span>The first real time crime center was established by the New York City Police Department in 2005. Real time crime centers can come in many forms &mdash; they can be as large as a dedicated room with multiple different screens, or as small as a laptop. The key to a real time crime center is the opportunity to take data and analytics, merge all of your different security operation assets into a single common operating picture, use that data and those analytics as efficiently as possible, and get that information to first responders in real time &mdash; so that they have as much situational awareness as possible heading into a situation.</span></p>
<h3><b>Axon Fusus</b></h3>
<p><span>Fusus is one vendor in the real time crime center space. Some of the advantages of Fusus:</span></p>
<p><span>Fusus is an open ecosystem. It allows you to work with your existing infrastructure &mdash; whatever you have &mdash; and merge it into a common Fusus operating picture. Fusus works with multiple different camera vendors. What you can see is a Fusus core box, which sits on a network and allows you to draw video streams from multiple different vendors. So whatever cameras you already have in place &mdash; whether you&rsquo;re an agency or a private business &mdash; it allows you to take your existing cameras and merge them into a common operating picture, along with all of your different security assets: body worn cameras, drones, anything you have for your security operations.</span></p>
<p><span>One of the other things Fusus allows you to do is use AI-driven workflows with your existing cameras. The Fusus core &mdash; which will fit in the palm of your hand &mdash; allows you to run AI analytics directly on your existing cameras, even if those cameras themselves don&rsquo;t have AI analytics. So when I get into the workflows of Lightpost, one of the things to think about is: if you don&rsquo;t have a license plate, you can still use Fusus to search for things like the car&rsquo;s make and model, shirt colors, and so on, using those AI analytics on the cameras themselves.</span></p>
<p><span>To summarize, Axon Fusus provides:</span></p>
<ul>
<li><span>A common operating picture</span></li>
<li><span>Real time emergency video access</span></li>
<li><span>More efficient use of personnel, because you know what&rsquo;s going on on scene</span></li>
<li><span>Enhanced precision policing</span></li>
<li><span>Fostered community collaboration</span></li>
<li><span>Cost savings, because Fusus works with your existing infrastructure</span></li>
</ul>
<h3><b>Example Workflow: Axon Lightpost</b></h3>
<p><span>I&rsquo;m going to dive into an example workflow for Lightpost &mdash; the ALPR camera that I am the product manager of &mdash; and talk through one of the workflows and cases that came out of our beta program.</span></p>
<p><span>This incident was a hit and run. A vehicle crashed into another car and immediately sped off. The victims provided officers with a picture of the suspect&rsquo;s vehicle. As I noted, even if they hadn&rsquo;t had a clear image of the license plate &mdash; which in this case they did &mdash; we could still use Fusus to solve that crime, because instead of searching by license plate specifically, you can search by things like &ldquo;blue vehicle.&rdquo;</span></p>
<p><span>But in this case we had the license plate. So the officers entered that license plate into Fusus as what&rsquo;s called a hotlist entry, which triggers an automatic alert whenever that vehicle is captured by another Lightpost camera. From there, officers actually had a nearby traffic camera that was also integrated into Fusus. They were able to pull up that traffic camera, confirm that the incident took place, and then send that footage to the real time crime center in their command center &mdash; both to gather more evidence of the incident and to confirm the victim&rsquo;s statement.</span></p>
<p><span>Officers then performed an ALPR search, going back to see all the locations that vehicle had been captured by their previous Lightpost cameras. They performed that search and were alerted to the nearest location where that vehicle had been found in the city. From there, they searched one of their other nearby traffic cameras and found that vehicle turning into a parking lot. They then dispatched officers to that location and, in real time, were able to stream that body worn camera footage directly to Fusus to see that the situation was being resolved &mdash; helping the officers on scene understand what was going on, communicating with them, and recording the incident.</span></p>
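<p><span>To make the hotlist pattern concrete, here is a minimal illustrative sketch of the alert-and-search logic described above. Every name and data structure in it is hypothetical; this is a generic sketch of the pattern, not Axon&rsquo;s software or API.</span></p>
<pre><code># Generic sketch of a hotlist workflow: alert on a watched plate in real time,
# and search past reads retrospectively. All names are hypothetical.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateRead:
    plate: str
    camera_id: str
    timestamp: datetime

hotlist = {"ABC1234"}          # plate entered after the hit-and-run report
history: list[PlateRead] = []  # every capture, searchable after the fact

def on_plate_read(read: PlateRead) -> None:
    history.append(read)
    if read.plate in hotlist:
        # In the workflow above, this is where the real time crime center
        # would be alerted and nearby integrated cameras pulled up.
        print(f"HOTLIST HIT: {read.plate} at {read.camera_id}")

def past_locations(plate: str) -> list[str]:
    """Retrospective ALPR search: everywhere this plate was captured before."""
    return [r.camera_id for r in history if r.plate == plate]

on_plate_read(PlateRead("ABC1234", "camera-17", datetime.now()))
print(past_locations("ABC1234"))
</code></pre>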
<p><span>If you haven&rsquo;t heard of Axon &mdash; if you ever see body worn camera footage on YouTube in the future and look in the upper right-hand corner, you&rsquo;ll see a yellow triangle. That&rsquo;s the Axon logo. So you&rsquo;ll start to notice that a lot of body worn camera footage you come across is from an Axon body worn camera.</span></p>
<p><b>Results of this workflow:</b></p>
<ul>
<li><span>Recovered $80,000 worth of illegal drugs; a large quantity of illicit substances packaged for individual sale was removed from the streets.</span></li>
<li><span>Footage of the incident was used to confirm the victim&rsquo;s statement, resulting in an evidence-backed arrest.</span></li>
</ul>
<p><span>Here&rsquo;s a direct quote from the officer: </span><i><span>&ldquo;Without the technology available to us, the case would have likely gone unsolved, as the driver was not the registered owner of the suspect vehicle.&rdquo;</span></i><span> In this case, had we not resolved that incident within 20 minutes &mdash; from initial call to suspect apprehension &mdash; and had we not had footage of where that vehicle traveled, it would have been difficult to confirm that the person in the vehicle was at the location when the incident took place. But because we had all this information available, the officers could make sure they were arresting the right person, understand the situation as they were heading into the response, and proceed from there.</span></p>
<h3><b>Video: Royal Bahamas Police Department</b></h3>
<p><span>Before Q&amp;A, I&rsquo;ll play a short video from the Royal Bahamas Police Department, which I think is very helpful in showing the value of Fusus through a real example of them using it to capture a shooting suspect.</span></p>
<p><b>Royal Bahamas PD representative:</b><span> You want an agency that can act fast when crime happens. You need people in a command center letting those mobile units know &mdash; &ldquo;hey, this is happening, a shooting is happening, a robbery is happening.&rdquo; That also helps with community trust. Once the community sees that police are responding fast enough, that is what our real time crime center was established for.</span></p>
<p><span>Before Axon Fusus, we had all these different platforms spread out. You would have CCTV cameras on one platform, another system on another. But now with Axon Fusus, everything is integrated into one. You have body cams, ShotSpotter, our CCTV program &mdash; everything is on one platform. It makes the response time a lot quicker.</span></p>
<p><span>For instance, we had a matter in 2024 with a shooting incident. ShotSpotter went off in Fusus. We clicked on the cameras, which pulls up all cameras within a mile radius of where the shot happened. Of course, we didn&rsquo;t know exactly what had happened just yet, but we saw there was a white Honda running a red light behind the victim&rsquo;s vehicle. We were able to get the license plate &mdash; and not only could we alert control to how the vehicle looked, we were able to broadcast a photo of the vehicle in Fusus so they could actually see it. So not only do they get a description over the radio, they&rsquo;re actually seeing on their mobile devices exactly how the car looks. And within an hour or two, that suspect was arrested.</span></p>
<p><span>Technology revolutionizes policing, enhancing efficiency, accountability, and crime prevention through surveillance, analytics, and real time data. We balance safety and hospitality through crime prevention, rapid response, and strategic policing to protect all, while upholding the Bahamas&rsquo; welcome and reputation.</span></p>
<p><b>Charley Moore:</b><span> While that video was from the Bahamas, I think it really shows the value of real time information in precision policing and policing operations, and the value that Lightpost and other security assets provide. With that, I&rsquo;ll turn it over to questions. Thank you again for your time.</span></p>
<h2><b>Q&amp;A</b></h2>
<p><b>Roland Vogl:</b><span> This is super interesting. I have a couple of questions. First, how many cameras are typically deployed in a city that you provide, and how much information is coming in from other sources? The video mentioned CCTV, and sometimes you hear about feeds from Ring cameras or gas stations. How is all this information being brought together and put in front of the police officers?</span></p>
<p><b>Charley Moore:</b><span> Great questions. For the first &mdash; how many cameras &mdash; it&rsquo;s really up to the agency, what their budget is, and what they&rsquo;re looking for. We&rsquo;ve done pilot programs with as few as four cameras, just to capture the entrances and exits of a city, so that if something happens in their jurisdiction, they can alert a nearby jurisdiction that a suspect is heading their way. Other agencies deploy 50 cameras or many more. It really just depends on the size of the city and the size of the agency.</span></p>
<p><span>For the second part &mdash; how many assets can be in a Fusus panel &mdash; it can be every officer&rsquo;s body worn camera, every drone, every single security asset. There&rsquo;s no limit in Fusus; Fusus doesn&rsquo;t charge by the number of assets you integrate. It&rsquo;s pretty easy to integrate specific assets that have existing integrations with Fusus. So there&rsquo;s really no limit. It&rsquo;s up to the agency for how much information they want in Fusus, whether they want to focus on specific security assets, and what their budget is for their real time policing operations.</span></p>
<p><b>Roland Vogl:</b><span> What about other intermediaries that try to aggregate information and then sell it to the police? For example, if a private security company has installed a camera at a gas station and a crime occurs there &mdash; is there an agreement with the police to have a feed, or does the police have to go there and subpoena or request the video?</span></p>
<p><b>Charley Moore:</b><span> Yes, there are some companies that aggregate that data. That&rsquo;s actually related to our Fusus donor program. One of the things that every agency gets when they sign up with Fusus is an allotment of those Fusus core devices that I talked about earlier. The agency can actually give those cores for free to local businesses &mdash; especially gas stations, convenience stores, stores that see a lot of crime and are very affected by it. They sign an MOU with the police department that details when and where the police can access that footage. From there, instead of having to manually share footage with law enforcement, when you call law enforcement they can immediately access that footage, see what&rsquo;s going on, help with community safety, and know exactly what situation they&rsquo;re getting into when they respond.</span></p>
<p><b>Roland Vogl:</b><span> What about the compliance frameworks that govern Fusus &mdash; specifically legal document automation, chain of custody, and evidence disclosure workflows downstream from Fusus data?</span></p>
<p><b>Charley Moore:</b><span> Right now, sharing video feeds with Fusus is governed by a memorandum of understanding signed between the business sharing their video feeds and the police department. There are also community council meetings when deploying Fusus, and it&rsquo;s publicized so that the community has input. The community can see the privacy safeguards in place, and there&rsquo;s a lot of documentation to make sure Fusus is very transparent about when and where the police can access video footage and how they can use it. The primary mechanism is that MOU between the business, homeowner, or private citizen who wants to share video directly with the police department.</span></p>
<p><b>Roland Vogl:</b><span> What about privacy rules? What does compliance mean in practice? The information picked up isn&rsquo;t always of a specific crime, so there&rsquo;s still a lot of other information that could be otherwise sensitive. What&rsquo;s done with that information &mdash; is it stored, then deleted? What practices are in place?</span></p>
<p><b>Charley Moore:</b><span> That&rsquo;s a really good question. Every state and county has its own rules and restrictions on when the video can be deleted, how it can be deleted, and how long it has to be retained. So before deploying Fusus, it&rsquo;s important to understand what each agency&rsquo;s rules are. Fusus was founded &mdash; I believe in 2019 &mdash; but video feeds for law enforcement have been around for a while, so most agencies are already familiar with the rules around storage and use. Fusus is able to remain compliant by automatically setting video feeds to be retained only for the duration that the county specifies, and we figure that out before deployment to make sure we&rsquo;re in compliance with all local ordinances.</span></p>
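<p><span>As a rough illustration of what jurisdiction-specific retention can look like in code (the jurisdictions and durations below are invented, and this is not Axon&rsquo;s implementation):</span></p>
<pre><code># Illustrative retention check; jurisdictions and durations are invented.
from datetime import datetime, timedelta

RETENTION = {"county_a": timedelta(days=30), "county_b": timedelta(days=90)}

def expired(recorded_at: datetime, county: str, now: datetime) -> bool:
    """True once a clip is past the county-mandated retention window."""
    return now - recorded_at > RETENTION[county]

now = datetime(2026, 2, 19)
print(expired(datetime(2026, 1, 1), "county_a", now))  # True: past 30 days
print(expired(datetime(2026, 1, 1), "county_b", now))  # False: within 90 days
</code></pre>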
<p><b>Roland Vogl:</b><span> It sounds like body cameras have brought a lot of clarity to situations that might otherwise have been unclear, and some of those situations have led to significant public incidents. Are there rules requiring that body cam footage be shared with the public or be accessible through public records requests?</span></p>
<p><b>Charley Moore:</b><span> Yes. There are a lot of different rules depending on the county, but in some counties all of that footage has to be stored for, let&rsquo;s say, 30 days. If a member of the public makes a Freedom of Information Act request, they can request that footage directly from the agency. Within Fusus, you can go in, select the video clip from a specific camera that you want to share, and download it to your computer. If it&rsquo;s a private company using Fusus, you can share that video clip directly with them &mdash; for example, if it&rsquo;s a shoplifting suspect that Walmart is looking for. But if it&rsquo;s a request from the general public, you download the video clip, save it, and then share it securely in response to the Freedom of Information Act request.</span></p>
<p><b>Roland Vogl:</b><span> How does the system manage legal consent? Is the system used to perform facial recognition?</span></p>
<p><b>Charley Moore:</b><span> Axon does not do facial recognition today in the use of Fusus. When I talk about object detection, I&rsquo;m talking about things like shirt color, shoes, pants &mdash; but today Fusus does not provide facial recognition.</span></p>
<p><b>Roland Vogl:</b><span> Benjamin is asking about the Fourth Circuit Court of Appeals saying that blanket surveillance &mdash; specifically all cameras &mdash; violates the Fourth Amendment right to privacy.</span></p>
<p><b>Charley Moore:</b><span> I would have to look up that case specifically and do some research. To be totally transparent, I have not heard of that specific case. I&rsquo;d definitely want to look into it and am happy to connect offline if you want to have a discussion, because I&rsquo;d love to learn more about that. But from what I can see, we&rsquo;re fully in compliance with all local laws.</span></p>
<p><b>Roland Vogl:</b><span> Most of your customers are police departments, but you&rsquo;re also selling to private organizations?</span></p>
<p><b>Charley Moore:</b><span> Yes, exactly. Security operations centers &mdash; using the same technology that law enforcement uses, but applying it to your stores and businesses to merge all of your operations and security assets into a common operating picture.</span></p>
<p><b>Roland Vogl:</b><span> What&rsquo;s the business model &mdash; is this a subscription sold to police departments?</span></p>
<p><b>Charley Moore:</b><span> For Lightpost, it&rsquo;s part of a five-year contract and involves buying the hardware. So you&rsquo;re purchasing Lightpost, you own the hardware, and then the service is billed annually. It depends on the specific products. With Fusus, it used to be that you paid per stream &mdash; for however many cameras you have, you paid for each of those cameras. Today, you&rsquo;re buying packages depending on how large a real time crime center you&rsquo;re building out with Fusus. But not everything is bundled together &mdash; Fusus doesn&rsquo;t necessarily come with Lightpost or another camera. It&rsquo;s really about building and mixing and matching the technology for each agency&rsquo;s needs. A lot of that also depends on what technology the agency already has, because there&rsquo;s no point in buying extra cameras if you already have high-functioning cameras in place. We come in, work with those existing cameras, and bring them into Fusus the same as any cameras you might purchase directly.</span></p>
<p><b>Roland Vogl:</b><span> How does the software work &mdash; is it machine learning, visual recognition algorithms? What&rsquo;s the tech stack used to match and track?</span></p>
<p><b>Charley Moore:</b><span> That depends on the product. For Lightpost, it&rsquo;s a third-party algorithm that runs on the device today. For Outpost, it&rsquo;s a proprietary algorithm that runs on device. And then standard video cameras have their own AI algorithms that run on the device and also do some processing in the cloud. They allow you to take those video feeds through what&rsquo;s called RTSP &mdash; the Real Time Streaming Protocol, a standardized protocol for video feeds &mdash; run them through the core, process them with AI analytics, and then send that video to Fusus.</span></p>
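<p><span>For readers unfamiliar with RTSP, here is a minimal generic sketch of what ingesting such a stream looks like with OpenCV. The URL and the analytics stub are placeholders; this illustrates the standardized protocol Moore mentions, not any vendor&rsquo;s stack.</span></p>
<pre><code># Minimal RTSP ingestion sketch using OpenCV (pip install opencv-python).
# The stream URL and analyze() are placeholders, not a vendor API.
import cv2

STREAM_URL = "rtsp://camera.example.local:554/stream1"  # hypothetical camera

def analyze(frame) -> None:
    # Stand-in for AI analytics (object detection, plate reading, etc.).
    h, w = frame.shape[:2]
    print(f"got frame {w}x{h}")

cap = cv2.VideoCapture(STREAM_URL)
try:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break  # stream dropped; a production system would reconnect
        analyze(frame)
finally:
    cap.release()
</code></pre>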
<p><b>Roland Vogl:</b><span> All right, well, thank you so much, Charley.&nbsp;</span></p>
<p><b>Charley Moore:</b><span> I&rsquo;m happy to continue the conversation &mdash; I love this technology and I&rsquo;m really passionate about all of it. Happy to connect with anybody and talk about Lightpost and policing.</span></p>
]]></content>
	<updated>2026-02-19T21:37:58+00:00</updated>
	<author><name>CodeX</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-02-19T21:37:58+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="codex"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-20:/280433</id>
	<link href="https://law.stanford.edu/2026/02/20/consent-all-the-way-down-why-the-mello-framework-for-ai-disclosure-in-healthcare-fails-on-its-own-terms/" rel="alternate" type="text/html"/>
	<title type="html">Consent All the Way Down: What Healthcare AI Disclosure Inherits from Privacy Law</title>
	<summary type="html"><![CDATA[<p>Abstract
Mello, Char, and Xu propose in JAMA a two-factor framework for deciding when healthcare org...</p>]]></summary>
	<content type="html"><![CDATA[<h5>Abstract</h5>
<p>Mello, Char, and Xu propose in <i>JAMA</i> a two-factor framework for deciding when healthcare organizations should notify patients about AI tools or seek their consent. The framework asks organizations to assess risk of harm and patient agency, then sort AI tools into three bins, namely consent, notification, or neither. I argue that the framework rests on three premises that undermine its own architecture. First, it treats &ldquo;human in the loop&rdquo; oversight as a reliable error-interception mechanism while simultaneously cataloging the reasons it is not. Second, it quantizes patient agency into a binary when agency exists on a gradient, assigning patients the role of quality-control agents while arguing elsewhere that patients cannot absorb more information. Third, it presumes that healthcare organizations possess the evaluative infrastructure to perform the risk assessments the framework demands. These are not independent weaknesses. They are surface expressions of a deeper structural problem. The Mello framework is a consent architecture, and consent architectures fail when they assume human capacities that do not exist at scale. Privacy law scholarship has already diagnosed this failure and begun developing alternatives. Healthcare AI governance should reckon with the same insight. I acknowledge that diagnosing the consent failure is easier than building the alternative, and that disclosure sometimes imposes real costs on patients. But these complications do not rescue a framework whose premises do not hold.</p>
<h5>I. The Source and Its Stakes</h5>
<p>Michelle Mello, Danton Char, and Sonnet Xu published &ldquo;<a href="https://watermark02.silverchair.com/jama_mello_2025_pp_250012_1755795638.28423.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf3qfKAc485ysgAAAzEwggMtBgkqhkiG9w0BBwagggMeMIIDGgIBADCCAxMGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMvrKn0M8uYg10-566AgEQgIIC5DyQDFop9u2ZArsWh6_RGVgrYSc4aVhn4bL3FenanYT0B17xi2GRTufPn-3-0Yc9jyf6GSBCcdd9amCoTC9VpdXOViM1dgQ_CzOn-95gPch4La9xSiAg9wbhHTAkXs4QvsYtGNwUT8iNsiOC21eCbi3YcDP938jR7PsQmKXI_PLrZ2wufkmx9dyCBxCJBmAHXHrvvaAamO6u8rKNDwnypxhfP-s41JBEN7rvim5Az26pwbpg9phmcXg_s6rZb1L2GrrwWCtj7_0U0Vum1wk86vPhNI91wbXEETD2xTaOokHm9XG8o-KKOOYbj8CZBWqgzmjYv-dtWQtkU6W-w6v954QX3YPJBJZZczlpf5U3KIvdDZEP-3c-_SdC3x0p6EmsgYKAZ56IfwhtNpNhLXMya9kpTlf6AXHf2t7b6VUwl7jfqvHqm48Q5HiuGsPTLiPhGMrzoEEVc0abrCW4T47_6rvQMt19X2mfC2I-oIaaSJW5G8CYC22CLAEhNTzJi_Mh_PzQYTLyfdoLQoOhPLd2oHevTGnBxeP9884B5Kh3nsvN7vskiP91oQGaGBSHsDaYTkMcBQVBuQZowG26v8LqGV1-NZ557EDweOqWKY4tYPoD081vZw0Xp1fVVExoUupUfHXM7mKRrC3fJQH7H9d6xUpPW32GVZQ3KBOfoyLyjijNaTNbrwt1nrGu0MsAlqSEZdgmvRmV9U-araqkrsfbaM8W-sv8d8IBAnb-qNXTIwIX-5qu1P_AtFvJPWCU4EJgWM0gGzptNJFbSoyA9RE5pSYT62AjUvdBFlB7F8E2wvUp8KllATnH54cvY0f5tH-5KofCv4wnNRUj27ckRy2ai2NUKN98K8Jo0Wb2jXPAwsWt_zQkQElzkeq1p8QV85UcEdZ1FOUjzO1a9mkMdMgRsn7ZAkzGQTmbgD0vHHl12ezYd5eWq3oJQaLAQRaW7uvLvXdn8IhjV1Z2ATlqgje2pKQP5zhN" rel="noopener noreferrer" target="_blank"><i>Ethical Obligations to Inform Patients About Use of AI Tools</i></a>&rdquo; in <i>JAMA</i> in September 2025. AI tools are proliferating across healthcare settings, and practitioners need guidance on what to tell patients about them. The authors propose a two-factor framework organized around (1) the risk that using the tool could cause physical harm and (2) the extent to which patients can exercise agency in response to a disclosure. Tools that score high on both factors warrant consent. Tools that score high on one warrant notification. Tools that score low on both warrant neither.</p>
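<p>The sorting logic can be stated schematically. The sketch below encodes the two-factor matrix exactly as described; the function and parameter names are mine, not the authors&rsquo;.</p>
<pre><code># Schematic encoding of the two-factor framework as described in the text.
# Names are illustrative; the article itself proposes no code.
def disclosure_category(high_risk_of_harm: bool, high_patient_agency: bool) -> str:
    if high_risk_of_harm and high_patient_agency:
        return "consent"
    if high_risk_of_harm or high_patient_agency:
        return "notification"
    return "neither"

assert disclosure_category(True, True) == "consent"
assert disclosure_category(True, False) == "notification"
assert disclosure_category(False, True) == "notification"
assert disclosure_category(False, False) == "neither"
</code></pre>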
<p>The framework contains structural weaknesses that deserve direct treatment. I will focus on three, then explain what connects them.</p>
<h5>II. The Circular Loop</h5>
<p>The authors build their framework on the premise that human oversight reduces residual risk to patients. They write that &ldquo;most tools in use today entail a human reviewing and acting on model output,&rdquo; and that &ldquo;the key question therefore is whether the residual risk to patients is low once one considers the likelihood that the human in the loop will successfully detect errors.&rdquo;</p>
<p>This is the load-bearing wall of the entire architecture. It determines which tools fall into the &ldquo;neither consent nor notification&rdquo; category. If human oversight reliably catches errors, residual risk is low, and disclosure becomes unnecessary.</p>
<p>And then the authors undermine it themselves. In the very next section, they acknowledge that &ldquo;capacity constraints, automation bias, and users&rsquo; unfamiliarity with a tool&rsquo;s weaknesses may undercut efficacy.&rdquo; Automation bias is one of the most well-documented phenomena in human-computer interaction research. Clinicians who know an AI has flagged or not flagged a condition routinely defer to the machine&rsquo;s judgment. The premise that human oversight will &ldquo;successfully detect errors&rdquo; is precisely the premise that automation bias scholarship has spent two decades dismantling.</p>
<p>The framework says, in effect, that disclosure is unnecessary when a human in the loop will catch errors, and then concedes that humans in the loop frequently do not catch errors. If the human-in-the-loop assumption holds only sometimes, the entire &ldquo;neither&rdquo; category is unstable. Some of the tools the authors place in that category belong there. Some do not. And the framework provides no mechanism for distinguishing which is which.</p>
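<p>The instability can be made concrete with a toy calculation. Suppose residual risk is, roughly, the model&rsquo;s error rate discounted by the probability that the human reviewer intercepts the error. The &ldquo;neither&rdquo; category then hinges entirely on an interception probability the authors themselves concede is unreliable. The numbers below are invented for illustration, not drawn from the article:</p>
<pre><code># Toy sensitivity check; all numbers are assumptions, not data from the article.
error_rate = 0.05  # assumed rate at which the model errs

for p_catch in (0.95, 0.60, 0.30):  # assumed human interception probabilities
    residual = error_rate * (1 - p_catch)
    print(f"p_catch={p_catch:.2f} -> residual risk={residual:.4f}")

# 0.0025 at p_catch=0.95, but 0.0350 at p_catch=0.30: the same tool moves
# from "negligible residual risk" to 14x that level as automation bias
# erodes the interception assumption.
</code></pre>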
<h5>III. The Agency Paradox</h5>
<p>The second factor in the framework asks whether patients have a &ldquo;meaningful opportunity to exercise agency.&rdquo; The authors identify two forms. Patients might opt out of the AI tool, or patients might alter their behavior in response to knowing AI is involved.</p>
<p>The second form does important work. It is the basis for recommending notification for AI-drafted patient emails and AI-generated clinical summaries. The authors argue that a patient who knows an email was AI-drafted &ldquo;may be more likely to question something that seems odd,&rdquo; and that a daughter who knows a nursing summary note relating to her mother was AI-generated &ldquo;may be more likely to log on to the electronic health record, check the note for errors and omissions, and alert the incoming nurse.&rdquo;</p>
<p>Earlier in the article, however, the authors argue against broad disclosure partly because &ldquo;patients who want to be kept informed about their care in principle may struggle with the information overload that a hospital admission entails.&rdquo;</p>
<p>These two positions are in tension. You cannot simultaneously argue that patients are too overwhelmed to process more information about AI tools and that informed patients will serve as effective quality-control agents for AI-generated communications. The daughter expected to check an AI-generated nursing note for errors needs the medical literacy to recognize what constitutes an error, the time and inclination to log into the electronic health record, and an understanding of what the note should contain. The framework assigns her a role it has given no reason to believe she can perform.</p>
<p>The deeper problem is that the framework treats agency as binary. Either patients can opt out or they cannot. Either they can alter their behavior meaningfully or they cannot. But agency exists on a gradient. A patient told about an AI tool but lacking the expertise to evaluate its output has more agency than one told nothing, but less than the framework assumes. A more honest treatment would acknowledge that the &ldquo;notification&rdquo; category rests on aspirational rather than demonstrated patient capacity.</p>
<h5>IV. The Missing Institutional Competence Question</h5>
<p>The framework instructs healthcare organizations to assess, for each AI tool, &ldquo;(1) the risk that the tool poses, (2) the likelihood that errors will reach patients without being intercepted, and (3) the severity of the harm that could result.&rdquo;</p>
<p>Who performs this assessment? With what data? And using what methodology?</p>
<p>Most healthcare organizations are still struggling with basic AI governance; the establishment of AI governance committees notwithstanding, much of that apparatus is window dressing. They lack the technical personnel to audit algorithmic performance across patient subgroups, the data pipelines to monitor error rates in production, and the institutional processes to translate risk assessments into consistently applied disclosure policies. These are not speculative deficiencies. A 2025 CHIME Foundation <a href="https://chimecentral.org/chime/resource-press-release/ai-adoption-survey-reveals-healthcares-governance-gap-drive-toward-agentic-usage" rel="noopener noreferrer" target="_blank">survey found</a> that only 8% of healthcare organizations described themselves as &ldquo;very confident&rdquo; in their ability to identify emerging AI risks, and only a little more than half had a formal process requiring approval before AI implementation. There is a certain irony in a framework that worries about &ldquo;perfunctory and legalistic&rdquo; consent being implemented by institutions that may apply the framework itself in a perfunctory and legalistic manner. An organization that lacks the infrastructure to assess algorithmic risk will default to the path of least resistance. And the path of least resistance in this framework is the &ldquo;neither&rdquo; category, because it requires no action. The framework&rsquo;s own structure creates an incentive to underassess risk.</p>
<h5>V. The Disclosure-Harm Tradeoff</h5>
<p>Before connecting these weaknesses, I want to engage the strongest argument the Mello framework has in its favor, because it is genuinely difficult.</p>
<p>The authors argue that disclosure can paradoxically harm patients. They cite evidence that patients perceive AI-drafted messages as more empathetic than physician-drafted ones, but rate those same messages significantly lower once told AI was involved. When AI-augmented care outperforms the alternative, a consent regime may result in clinicians &ldquo;having to deliver suboptimal treatment.&rdquo; This is a real cost. It is not hypothetical. And any critique of the framework must reckon with it.</p>
<p>I think the argument is correct on its own terms but insufficient as a foundation for the framework&rsquo;s permissive categories. The disclosure-harm evidence establishes that some patients, told about some AI tools, will make choices that leave them worse off. That is a genuine tradeoff. But the framework uses this tradeoff to justify a &ldquo;neither&rdquo; category that sweeps in tools where the calculus is far less clear. The evidence that disclosure harms patients comes from specific contexts (patient messaging, mammography interpretation) and cannot be generalized to every tool the framework places in the &ldquo;neither&rdquo; bin without further empirical work. The framework treats a finding about particular tools as a license for institutional silence across categories.</p>
<p>Moreover, the cost of disclosure-induced suboptimal choices must be weighed against the cost of systematic under-disclosure by organizations that default to the category requiring no action. The former cost is visible and measurable. The latter is diffuse and delayed, which makes it harder to track but not less real.</p>
<h5>VI. The Consent Substrate</h5>
<p>The three weaknesses in Sections II through IV are not independent. They share a common root. The Mello framework is, essentially, a consent architecture. It assumes that if organizations assess risk and disclose appropriately, patients will exercise informed agency. The three failures are symptoms of a condition that privacy law scholarship has already diagnosed.</p>
<p>The notice-and-consent paradigm in privacy law fails at four sequential stages. Reading is impossible given the volume of policies. Comprehension is unattainable given their complexity. Evaluation is foreclosed because users lack the technical expertise to assess risk. And action is impossible because users face take-it-or-leave-it terms. Even ambitious regulatory frameworks like the CCPA have failed to remedy these defects, because rights are rendered worthless when they cannot be exercised. The evidence base for these claims is substantial. Carnegie Mellon researchers calculated in 2008 that reading the privacy policies an average American encounters would require roughly 30 working days per year. Literacy surveys show nearly half of American adults lack the reading level these policies demand. CCPA exercise rates remain remarkably low years after implementation.</p>
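<p>The order of magnitude is easy to reproduce. The arithmetic below uses round assumed inputs, not the study&rsquo;s exact figures, simply to show how quickly the reading burden compounds:</p>
<pre><code># Illustrative arithmetic for the reading-burden estimate; the inputs are
# round assumptions, not the Carnegie Mellon study's exact parameters.
policies_per_year = 1500   # assumed distinct privacy policies encountered
minutes_per_policy = 10    # assumed time to read one policy

hours = policies_per_year * minutes_per_policy / 60
workdays = hours / 8
print(f"{hours:.0f} hours, about {workdays:.0f} eight-hour working days")
# 250 hours, about 31 eight-hour working days
</code></pre>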
<p>Empirical work on patient attitudes toward AI disclosure exists and is growing, but there remains almost no empirical measurement of the structural variables the analogy requires: how many AI tools a typical admission implicates, whether patients in practice comprehend AI disclosure forms, or how often patients exercise opt-out rights when offered. I want to be precise about that gap. Still, the structural parallels are strong enough to warrant concern. Reading is impossible when a patient&rsquo;s care implicates dozens of AI tools. Comprehension is unattainable when evaluating algorithmic risk requires expertise patients do not possess. Evaluation is foreclosed because patients cannot assess whether a human in the loop is actually catching errors. And action is impossible for operational AI tools that patients cannot opt out of. Empirical work specifically measuring patient comprehension of AI disclosure forms, and patient exercise rates when opt-outs are offered, would test whether these parallels hold quantitatively. But the structural logic does not depend on the numbers being identical. It depends on the same mismatch between assumed and actual human capacity.</p>
<p>Privacy law scholarship has begun moving past this impasse. The emerging recognition is that when consent is structurally impossible, effective protection requires delegation to intermediaries capable of acting at scale on behalf of individuals. Whether that intermediary takes the form of an <a href="https://dx.doi.org/10.2139/ssrn.6173424" rel="noopener noreferrer" target="_blank">AI agent with limited legal capacity</a>, a fiduciary with enforceable duties, or some other institutional design, the underlying insight is the same. The evaluative function that consent regimes assign to individuals must be relocated to systems designed to perform it.</p>
<p>I am aware that identifying the consent failure is easier than building the alternative. An intermediation model for healthcare AI governance raises questions about cost, access, liability, and regulatory design that this commentary cannot resolve. Healthcare is a domain where regulatory complexity, liability exposure, and patient vulnerability are all higher than in consumer data protection. The intermediaries that privacy law scholarship envisions do not yet exist in healthcare, and constructing them is an enormously difficult institutional project.</p>
<p>But the first step is recognizing that the current framework rests on assumptions that do not hold. The Mello framework asks patients to do what privacy law has demonstrated individuals cannot do, namely evaluate institutional disclosures about complex algorithmic systems and exercise meaningful agency in response. A healthcare AI governance framework that takes the consent literature seriously would ask not &ldquo;what should patients be told&rdquo; but &ldquo;what institutional structures ensure that someone with the competence to evaluate algorithmic risk is actually doing so on the patient&rsquo;s behalf.&rdquo;</p>
<h5>VII. What the Framework Does Not Say</h5>
<p>The Mello framework assumes human oversight that automation bias research calls into question. It assumes patient agency that information overload scholarship undermines. It assumes institutional capacity that healthcare governance surveys consistently find lacking. And it inherits these assumptions from a consent paradigm whose structural failure is no longer a matter of speculation but of accumulated evidence.</p>
<p>A disclosure framework is only as strong as the institutions that implement it, and a consent architecture is only as strong as the humans it expects to exercise consent. Without intermediation, the elegant two-factor matrix becomes a mechanism for sorting AI tools into the category that requires the least organizational effort. That is not a governance framework. That is a permission structure for opacity.</p>]]></content>
	<updated>2026-02-20T16:29:10+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-02-20T16:29:10+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="ai governance"/>

	<category term="ai healthcare"/>

	<category term="eran kahana"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-18:/280281</id>
	<link href="https://www.gautrais.com/blogue/2026/02/18/au-nouveau-brunswick-les-renseignements-personnels-de-plus-de-350-menages-envoyes-a-la-mauvaise-adresse/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=au-nouveau-brunswick-les-renseignements-personnels-de-plus-de-350-menages-envoyes-a-la-mauvaise-adresse" rel="alternate" type="text/html"/>
	<title type="html">In New Brunswick, the Personal Information of More Than 350 Households Was Sent to the Wrong Address</title>
	<summary type="html"><![CDATA[<p>Roger Nzouetchep Ketat is a student in the course DRT6929 (Vie priv&eacute;e + Num&eacute;rique) (Winter 2...</p>]]></summary>
	<content type="html"><![CDATA[<p><strong><a href="https://www.gautrais.com/files/sites/185/2026/02/Capture-decran-le-2026-02-18-a-16.37.14.png" rel="noopener noreferrer" target="_blank"><img fetchpriority="high" decoding="async" src="https://www.gautrais.com/files/sites/185/2026/02/Capture-decran-le-2026-02-18-a-16.37.14-475x573.png" alt="" srcset="https://www.gautrais.com/files/sites/185/2026/02/Capture-decran-le-2026-02-18-a-16.37.14-475x573.png 475w,https://www.gautrais.com/files/sites/185/2026/02/Capture-decran-le-2026-02-18-a-16.37.14-975x1176.png 975w,https://www.gautrais.com/files/sites/185/2026/02/Capture-decran-le-2026-02-18-a-16.37.14-768x926.png 768w,https://www.gautrais.com/files/sites/185/2026/02/Capture-decran-le-2026-02-18-a-16.37.14-725x874.png 725w,https://www.gautrais.com/files/sites/185/2026/02/Capture-decran-le-2026-02-18-a-16.37.14.png 1116w" sizes="(max-width: 219px) 100vw, 219px" referrerpolicy="no-referrer" loading="lazy"></a>Roger Nzouetchep Ketat is a student in the course DRT6929 (Vie priv&eacute;e + Num&eacute;rique) (Winter 2026)</strong></p>
<p><strong>Protecting personal information is a major challenge for organizations, public and private alike, in a context marked by the growing digitization of services and the large-scale circulation of information. To prevent and limit the consequences of privacy incidents, several states, particularly among developed countries, have modernized their legislative frameworks or adopted new personal information protection measures. These changes have strengthened the obligations of public and private organizations with respect to confidentiality, information security, and accountability.</strong></p>
<p>Against this backdrop, privacy incidents remain a constant concern. For example, <a href="https://ici.radio-canada.ca/nouvelle/2226576/assurance-maladie-erreur-protection-donnees" rel="noopener noreferrer" target="_blank">a recent incident in New Brunswick, in which the medicare numbers of more than 350 households were mistakenly sent to the wrong</a> address, compromised the confidentiality of many people. According to the news article, the Government of New Brunswick explains that some people</p>
<blockquote><p>&ldquo;<em>may have received by mail medicare information belonging to other people along with their own</em>&rdquo;.</p></blockquote>
<p>This event highlights the importance of rigorous, compliant management of personal information.</p>
<p>A medicare number is considered sensitive personal information because of its direct link to a person&rsquo;s health and private life. It therefore requires a high level of protection to prevent identity theft and unauthorized access to medical records. Under New Brunswick legislation, disclosing this number to unauthorized third parties constitutes an unreasonable invasion of privacy within the meaning of s. 21(2) of the <em>Right to Information and Protection of Privacy Act</em> (RTIPPA).</p>
<p>In New Brunswick, particularly in the public sector, as soon as a privacy breach is suspected or discovered, the affected body must take <a href="https://www.gnb.ca/fr/gouv/acces-information-et-vie-privee/ressources-gouv-organismes-publics/gestion-des-atteintes.html" rel="noopener noreferrer" target="_blank">a number of steps</a>:</p>
<p>First, it must <strong>contain the breach</strong>. Among other things, this means immediately reporting the incident to the privacy officer, conducting an initial assessment, taking immediate steps to end the breach and prevent any further disclosure of personal information, securing and, where possible, retrieving the personal information already disclosed, and determining who within the organization must be informed.</p>
<p>Second, the body must conduct <strong>a risk assessment</strong>. This involves identifying the type of personal information affected by the breach and assessing its sensitivity, and determining the cause and extent of the breach, the number of people affected, and the foreseeable harm to them.</p>
<p>Third, the body must <strong>give notice</strong>. Among other things, this step involves informing the individuals concerned as well as the <a href="https://ombudnb.ca/fr/information-et-vie-privee/#Ressources" rel="noopener noreferrer" target="_blank">Ombud</a>, the office responsible for, among other things, investigating public complaints about privacy breaches.</p>
<p>Fourth, it must put measures in place to <strong>prevent future breaches</strong>. This means formulating recommendations following an investigation so that a similar breach does not happen again, and developing a plan to implement those recommendations and corrective measures.</p>
<p>Finally, the <a href="https://laws.gnb.ca/fr/document/rc/2010-111" rel="noopener noreferrer" target="_blank">General Regulation</a> made under the RTIPPA requires, in paragraph 4.2(4)(b), that public bodies keep <strong>a record</strong> of every actual privacy breach and of the corrective measures taken.</p>
<p>The minister responsible for Service New Brunswick, the article reports, says that the problem behind the error has been fixed and that steps have been taken to mitigate the incident&rsquo;s impact on those affected. It is also noted that everyone affected will be contacted by Medicare.</p>
<p>By way of comparison, had the incident occurred in Quebec, it would have been subject to the legal regime established by sections 63.8 to 63.11 and section 127.2 of the <a href="https://www.legisquebec.gouv.qc.ca/fr/document/lc/a-2.1" rel="noopener noreferrer" target="_blank"><em>Act respecting Access to documents held by public bodies and the Protection of personal information (CQLR, c. A-2.1)</em></a> (Access Act), as well as to <a href="https://www.cai.gouv.qc.ca/protection-renseignements-personnels/information-entreprises-privees/incidents-confidentialite-mesures-securite-entreprises" rel="noopener noreferrer" target="_blank">the guidance of the Commission d&rsquo;acc&egrave;s &agrave; l&rsquo;information (CAI).</a></p>
<p>In the event of a confidentiality incident, the body must, first, take reasonable measures to reduce the risk of harm and to prevent new incidents of the same nature. Second, it must assess whether the incident presents a risk of serious injury, notably by weighing the seriousness of the risk for all the individuals concerned, in consultation with the person in charge of the protection of personal information. Third, it must notify the CAI and the individuals concerned, as well as any person or body in a position to prevent or reduce the risk of serious injury. Finally, it must keep a register of confidentiality incidents.</p>
<p>As this comparison shows, when it comes to confidentiality incidents and privacy breaches, the obligations of public bodies in New Brunswick and in Quebec appear essentially the same, apart from the powers of the Ombud and the CAI.</p>
<p>The CAI is an independent public body vested with <a href="https://www.cai.gouv.qc.ca/commission-acces-information/fonctions-pouvoirs-commission-acces-information" rel="noopener noreferrer" target="_blank">both oversight powers and adjudicative powers</a> as an administrative tribunal, and its jurisdiction extends to both the public and private sectors. It has the power to issue binding decisions. Under s. 127.2 of the Access Act, when a confidentiality incident is brought to its attention, the CAI may order any person to apply any measure aimed at protecting the rights that the law grants to the individuals concerned, for the period and on the conditions it determines.</p>
<p>The Ombud, for its part, is an independent officer of the Legislative Assembly who acts as an investigation and recommendation mechanism without adjudicative powers (<a href="https://lois.gnb.ca/fr/document/lc/R-10.6" rel="noopener noreferrer" target="_blank">ss. 64.1(1) and 68(1) RTIPPA</a>). Its jurisdiction is largely limited to the public sector.</p>
<p>Since late July 2025, the Government of New Brunswick has been conducting a <a href="https://www.gnb.ca/fr/gouv/mobilisation-consultation/examen-ldipvp.html" rel="noopener noreferrer" target="_blank">review of the Right to Information and Protection of Privacy Act.</a> One of the issues in this reform is precisely whether to give the Ombud the powers of an independent review body, as indicated on page 9 of the <a href="https://www.gnb.ca/content/dam/GNB3/gov/rtippa-ldipvp/docs/rtippa-travail-2025-web.pdf" rel="noopener noreferrer" target="_blank">working paper.</a></p>
]]></content>
	<updated>2026-02-18T21:38:22+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-02-18T21:38:22+00:00</updated>
		<title>Vincent Gautrais</title></source>

	<category term="cours"/>

	<category term="mes étudiant-e-s"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-18:/280265</id>
	<link href="https://law.stanford.edu/2026/02/12/cbc-project-february-12-2026-codex-group-meeting/" rel="alternate" type="text/html"/>
	<title type="html">CBC Project – February 12, 2026 Codex Group Meeting</title>
	<summary type="html"><![CDATA[<p>Alexandre Gleria, a Brazilian corporate attorney and CodeX affiliate, presented his Corporate Behavi...</p>]]></summary>
	<content type="html"><![CDATA[<p>Alexandre Gleria, a Brazilian corporate attorney and CodeX affiliate, presented his Corporate Behavior Coding (CBC) project. The system uses behavioral data, primarily litigation records, board compensation figures, and qualified stakeholder perceptions, to rate and classify companies in ways that traditional financial statements cannot. The core insight is that financial data alone is biased and manipulation-prone, while behavioral signals like litigation patterns relative to industry peers and the ratio of executive pay to earnings reveal the human decision-making behind a business, which ultimately predicts its sustainability.</p>
<figure aria-describedby="caption-attachment-558628"><img fetchpriority="high" decoding="async" src="https://law.stanford.edu/wp-content/uploads/2026/02/cbc-project-february-12-2026-codex-group-meeting.jpg" alt="CBC Project &ndash; February 12, 2026 Codex Group Meeting" srcset="https://law.stanford.edu/wp-content/uploads/2026/02/cbc-project-february-12-2026-codex-group-meeting.jpg 1744w,https://law.stanford.edu/wp-content/uploads/2026/02/cbc-project-february-12-2026-codex-group-meeting-300x166.jpg 300w,https://law.stanford.edu/wp-content/uploads/2026/02/cbc-project-february-12-2026-codex-group-meeting-1024x566.jpg 1024w,https://law.stanford.edu/wp-content/uploads/2026/02/cbc-project-february-12-2026-codex-group-meeting-768x425.jpg 768w,https://law.stanford.edu/wp-content/uploads/2026/02/cbc-project-february-12-2026-codex-group-meeting-1536x849.jpg 1536w,https://law.stanford.edu/wp-content/uploads/2026/02/cbc-project-february-12-2026-codex-group-meeting-1152x637.jpg 1152w,https://law.stanford.edu/wp-content/uploads/2026/02/cbc-project-february-12-2026-codex-group-meeting-145x80.jpg 145w,https://law.stanford.edu/wp-content/uploads/2026/02/cbc-project-february-12-2026-codex-group-meeting-220x122.jpg 220w" sizes="(max-width: 1744px) 100vw, 1744px" referrerpolicy="no-referrer" loading="lazy"><figcaption>CBC Project</figcaption></figure>
<p><a href="https://youtu.be/f0BnePIJ3qM?si=pMk6qOtDX0D8Td2I" rel="noopener noreferrer" target="_blank">Watch video of 2.12.2026 CodeX Group Meeting with the CBC Project</a></p>
<p><strong>Transcript</strong></p>
<p><span>Roland Vogl:</span></p>
<p><span>We will now be turning it over to Alexandre Gleria, who will be updating us on his work on the CBC project, which is a project he&rsquo;s been working on as a CodeX affiliate. He&rsquo;s of course also a practicing corporate attorney in Brazil, and we&rsquo;re excited to hear where the project stands at this point.</span></p>
<p><span>Alexandre Gleria:</span></p>
<p><span>Thank you very much, Roland, and thank you for the opportunity and for being so kind to us from so far away. I&rsquo;ll try to speed up here because I have a lot to talk about today. Basically, what is the project about? We&rsquo;re creating a method and a system that can indicate the human behavior behind business decisions. It can classify companies, sectors, and also jurisdictions. We&rsquo;re finding that this could have very broad application from a commercial and industrial perspective.&nbsp;</span></p>
<p><span>Based on several research papers that we are writing, this could also be a game changer in several fields, especially because it maps human behavior. We can avoid the bias of the data that companies provide, given that all of it comes from their own sources. Of course, we have Big Four firms auditing these numbers, and we have accountants looking into them, but to this day there is a lot of room for fraud, loopholes, and so on.</span></p>
<p><span>These ratings and classifications are also showing very promising results in terms of anti-fraud properties, especially because we are observing that a company&rsquo;s rating and ranking only evolve if the human behavior behind the business also evolves. That is a very cool feature we are discovering in our research. I also have a lot of tests I can show you of how this could be a game changer in areas such as credit analysis, stock market analysis, and so on.</span></p>
<p><span>Basically, I have here just a cartoon of how this project started. In 2018 &mdash; well before the AI products we have today &mdash; we started a project in our firm to develop mathematical ways of measuring the financial impact and probability of loss in contentious litigation. We also observed patterns in some of our clients &mdash; clients that were not aligned with our culture, clients that were very problematic. These clients carried a very peculiar litigation profile.</span></p>
<p><span>Based on both of these observations, from 2018 until today, we hired Big Four firms, interviewed C-level executives, and brought on mathematicians, data scientists, and economists. We have some state judges as partners in our project. One of these judges is helping a lot on the inside, and he is also very good with equations. We have economists on our team who are also here listening to us.</span></p>
<p><span>The starting point of Corporate Behavior Coding &mdash; the trigger for the project as we know it today &mdash; was the finding that litigation data is very rich in exactly the information that financial statements do not make clear. If a company has a bad product, that ends up in litigation. If a company says it treats employees very nicely but behind the scenes does a lot of wrongdoing, that can also end up in the courts. If companies have corporate struggles and fights among partners &mdash; all kinds of things &mdash; in the end, we place our confidence in the judicial branch. Based on this, we classified litigation as a very good source for understanding corporate behavior &mdash; the behavior behind business decisions &mdash; because even in the generation of machines and robots, all business decisions are made by humans in the end.</span></p>
<p><span>A subsequent development of the project was researching other sources that could also indicate behavior, such as board compensation figures, in the sense that business perpetuity is very strongly connected with how the partners and shareholders of the business are compensated over time. We also looked at the perception of qualified stakeholders.</span></p>
<p><span>Currently, a lot of data is being collected from the markets &mdash; from X, for example, from a company&rsquo;s public image, or from the press. But what matters here, we saw, is data quality. So we started to realize that consumers, employees, or former employees of a company could also provide very good insights about its future, about the perpetuity of the business, and about other features of the company.</span></p>
<p><span>We have also seen, as you can see in the diagrams, that these traces of human behavior are very dense in litigation and board compensation documents and in the perception of qualified stakeholders. A company&rsquo;s financial statements, by contrast, carry a lot of bias and limited scope: there are many conflicts of interest, and of course the company wants to show the best numbers possible. It is very unlikely that you could see a raw version of the corporate culture behind those numbers.</span></p>
<p><span>Based on that, we built the classification &mdash; the rationale of what kind of data we could consider as behavioral data. I will not explain this in detail because we don&rsquo;t have time. The rationale of the behavioral data is essentially what I just mentioned: avoiding bias, quality of the information, the depth of this information. Most of this data is coming from external sources, and the ratings evolve if the behavior of the business people also evolves.</span></p>
<p><span>Basically, we are in the final stage of the paper. As you know, we are also researching a lot of mind-blowing facts from the numbers and figures that we extracted from this study. We are finding that there could exist a lot of symmetries that we also find in nature, within these numbers and these classifications. That is a very cool feature as well of CBC, but we are digging on that and also researching with other specialists.</span></p>
<p><span>One thing that is also cool about this project is the question of why the results are so good. We started to dig into that. Basically, we find that combining two different sorts of data reveals pictures that the numbers alone do not show. Here is the analogy we make: a computer system runs on a binary scheme. If I look only at financial data, that is the zero &mdash; I don&rsquo;t get much depth in the view of what I&rsquo;m analyzing. When we add the behavioral data &mdash; the one &mdash; we can see images and form hypotheses that help decision-makers understand what is going on in the business. The zero doesn&rsquo;t mean nothing, but only when I add the one to the binary code do I get vision, if you look at it from the outside.</span></p>
<p><span>We are also seeing that these methods have a lot of advantages over other very recent and promising research. I will not talk about that in depth because of time. Let me just show some results in two or three minutes, and then we can open for questions.</span></p>
<p><span>Basically, we ran a lot of tests on 180 publicly traded companies in Brazil and created ratings for each of them. Here are some companies rated very highly by Big Four firms and credit agencies. As you can see, all of these companies showed a very bad rating under our methodology as much as ten years before a Chapter 11 event, for example.</span></p>
<p><span>Roland Vogl:&nbsp;</span></p>
<p><span>Can you give an example of what behaviors you measure? Like, do you give a rating to companies that display certain behaviors? You mentioned fraudulent behavior, but what other specific behaviors are you tracking? Do you find they have an impact and a dependent variable that goes to the performance of the company? You&rsquo;re looking at measuring human behavior in a company and then seeing if it has an impact on stock performance, litigation, bankruptcy, or certain outcomes. What exactly are you measuring?</span></p>
<p><span>Alexandre Gleria:</span></p>
<p><span>We are measuring, for example, the amounts &mdash; combining pairs of information and measuring the asymmetry of that information. As an example, if litigation on a certain matter is at a high level while the company presents very good earnings, that asymmetry can be an indication of whether this business, in a small fraction of its DNA, is sustainable. Or a company&rsquo;s litigation level may look very acceptable by market standards &mdash; so we measure the occurrences of litigation for a particular company and compare them with how common litigation is in that particular industry.</span></p>
<p><span>Roland Vogl:</span></p>
<p><span>So if one company has more litigation than another, is that really the variable you&rsquo;re measuring &mdash; a high amount of litigation &mdash; and then seeing if that has some predictive power over the company&rsquo;s financial performance?</span></p>
<p><span>Alexandre Gleria:</span></p>
<p><span>It does, but isolating litigation data alone is worthless. You need to combine it, and then you see the asymmetry between litigation, for example, and other data and the financial robustness of the business. Or, for example, if you look at the directors&rsquo; payment &mdash; the salary of the board of directors &mdash; and compare it with the earnings of the company, you can also see whether this business is sustainable, or whether there&rsquo;s a pattern indicating that shareholders were cashing out the company and in two or three years it will go through Chapter 11.</span></p>
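<p><span>To make the pairing idea concrete, here is a minimal editorial sketch of how an asymmetry signal of this kind might be computed. All names, fields, and numbers are illustrative assumptions; the actual CBC methodology is not public.</span></p>
<pre><code># Hypothetical sketch of a litigation/earnings asymmetry signal.
# Field names and the formula are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class CompanyYear:
    litigation_count: int       # suits against the company this year
    industry_median_suits: int  # median suit count among industry peers
    earnings_margin: float      # net earnings / revenue

def asymmetry_signal(c: CompanyYear) -> float:
    """Grows when litigation is high relative to peers while reported
    earnings look strong -- the mismatch described above, rather than
    litigation volume in isolation."""
    litigation_excess = c.litigation_count / max(c.industry_median_suits, 1)
    return litigation_excess * max(c.earnings_margin, 0.0)

# Twice the peer-median litigation despite a 20% margin scores higher
# than the same margin with peer-level litigation.
print(asymmetry_signal(CompanyYear(40, 20, 0.20)))  # 0.4
print(asymmetry_signal(CompanyYear(20, 20, 0.20)))  # 0.2
</code></pre>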
<p><span>Roland Vogl:&nbsp;</span></p>
<p><span>Got it. Could you share the main conclusions and takeaways?</span></p>
<p><span>Alexandre Gleria:</span></p>
<p><span>The main conclusions are that our results, as I said, were very robust. We created stock portfolios just based on this methodology, and the returns &mdash; not only in Brazil but also in the U.S. &mdash; are equally robust. In 2022, we also ran a real test for a hedge fund in Brazil. We analyzed a 20-stock portfolio to see what was going on with it. We realized that one particular stock had some issues when mapping these images based on these contrasts. We identified the problem, told the fund managers what was going on, and since then the stock is down almost 90%. We have a lot of tests here that will be in our paper.</span></p>
<p><span>Just to conclude &mdash; if possible in two minutes &mdash; I want to show you the platform that we are building based on what we are creating.</span></p>
<p><span>So basically, here we can compare a lot of companies, not only in Brazil but also outside Brazil. The data is not real because, as you know, we filed the patents a few days ago, but here you can compare a lot of information on litigation, the index of the company, the litigation asymmetries, and the litigation map comparison between two companies. There are a lot of things we can measure here with the methodology, but this is just a flavor. There&rsquo;s a lot of theory behind it.</span></p>
<p><span>Roland Vogl:</span></p>
<p><span>How can people learn more? Is the paper available? Can you share a link?</span></p>
<p><span>Alexandre Gleria:</span></p>
<p><span>The paper will probably be available within two months. We are finishing it.</span></p>
<p><span>Roland Vogl:</span></p>
<p><span>Awesome. Thank you so much, Alexandre. I think it&rsquo;s a cool idea to be able to understand legal indicators and signals &mdash; litigation, but also other things that are legally relevant &mdash; and then map that to a company&rsquo;s performance. That&rsquo;s what you&rsquo;re trying to do in essence. So what&rsquo;s your long-term goal? This started as an interest from practicing corporate law with large public companies in Brazil. You said at the beginning you saw some clients with different corporate cultures and behaviors than others. You could almost anticipate that one company would do better than another, and there is a connection between the culture and the way people conduct themselves in a business, and the outcome. That was your thesis, and that was the research project. But there&rsquo;s also a patent now and a company that will presumably try to make those insights available, because it is potentially tradable information. Can you talk a little bit about how you see the future unfolding for this project?</span></p>
<p><span>Alexandre Gleria:</span></p>
<p><span>Well, basically, we realized that companies with a specific profile and culture don&rsquo;t leak targets, avoid litigation, have less litigation outstanding in comparison with peers, and the quality of their products, services, or intangibles is superior, such that they don&rsquo;t attract a lot of claims in a general sense &mdash; not only in litigation but also in behavioral data, such as opinions issued by qualified stakeholders like a consumer, an employee, or an employer.</span></p>
<p><span>Based on that, we saw that, for example, if we compare and translate this into real numbers &mdash; the amount of litigation in terms of volume, the amount of litigation in U.S. dollars &mdash; and compare it with the economic capacity of the company, this gives you an idea of how sustainable the company is. The idea here is to scale that, because there are a lot of people working on unstructured data arrangements and platforms in order to use, for example, transformers to process a lot of material from companies that have a lot of information. But this approach, in our opinion, is not the best one. First, because it consumes a lot of compute resources, and the quality of the raw material and inputs used is not good either.</span></p>
<p><span>The idea is that with the right inputs &mdash; the behavioral data I just gave examples of &mdash; we can scale this to provide, globally, more reliable corporate ratings and classifications in all jurisdictions, regardless of whether a jurisdiction is common law, civil law, or anything else, especially because we are running tests not only in Brazil but also in the U.S., the U.K., and other jurisdictions. The key point is that once this data is structured, we can scale very fast. Of course, this permits analysis of public companies more easily, but it could also be used for privately held companies.</span></p>
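<p><span>As a rough illustration of the &ldquo;structured data scales fast&rdquo; point, a jurisdiction-agnostic litigation record might normalize exposure against economic capacity roughly as in the sketch below. The fields and the ratio are assumptions made for illustration, not the patented method.</span></p>
<pre><code># Hypothetical jurisdiction-agnostic litigation record; all fields
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LitigationProfile:
    jurisdiction: str             # e.g. "BR", "US", "UK"
    open_suits: int               # litigation volume
    claimed_amount_usd: float     # aggregate amount in dispute, in USD
    economic_capacity_usd: float  # e.g. EBITDA or equity, in USD

def exposure_ratio(p: LitigationProfile) -> float:
    """Claimed amounts as a share of what the company can absorb;
    comparable across jurisdictions once amounts are in USD."""
    return p.claimed_amount_usd / p.economic_capacity_usd

br = LitigationProfile("BR", 120, 40e6, 500e6)
us = LitigationProfile("US", 15, 90e6, 400e6)
# Fewer suits can still mean higher exposure: 0.08 vs. 0.225.
print(exposure_ratio(br), exposure_ratio(us))
</code></pre>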
<p><span>Roland Vogl:</span></p>
<p><span>Do you have a name for the paper already?</span></p>
<p><span>Alexandre Gleria:</span></p>
<p><span>The name &mdash; we are thinking about it. CBC was a very corporate name that we chose. The original informal name, the nickname of the project in its first months, was the Corporate Genomics Project. We named it CBC because we built a whole theory around it, and we are just consolidating that to give an official name to the paper. We have a lot of mathematicians involved in the paper because &mdash; I&rsquo;m an attorney, even though I&rsquo;m a specialist in tax law and corporate law &mdash; we need qualified people on this paper, with a very strong background in math and financial markets, in order to prove our theory.</span></p>
<p><span>To conclude on the tests: when we compare these ratings with those of rating agencies or any other kind of data available in financial markets, in some specific cases they prove to be much more reliable than anything else. We also prepared slides showing that over the last 50 years there has been a lot of innovation, for example in cancer diagnosis and in medicine generally. Yet when you look at the corporate outlook over those fifty years, we still have fraud scandals, accounting mistakes, and so on. Basically, the idea here is to provide not only a new product and idea to the market, but also a culture of corporate diagnostics.</span></p>
<p><span>Roland Vogl:</span></p>
<p><span>That&rsquo;s an exciting future. I think you could apply it to the corporate world, the crypto world, and other areas too. Thank you again, Alexandre. Thanks for joining us.</span></p>]]></content>
	<updated>2026-02-12T18:54:15+00:00</updated>
	<author><name>CodeX</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-02-12T18:54:15+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="codex"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-18:/280266</id>
	<link href="https://law.stanford.edu/2026/02/12/swiftlaw-february-12-2026-codex-group-meeting/" rel="alternate" type="text/html"/>
	<title type="html">SwiftLaw – February 12, 2026 Codex Group Meeting</title>
	<summary type="html"><![CDATA[<p>SwiftLaw, founded by Saketh Kesiraju, is a vertical AI platform that automates fund formation for em...</p>]]></summary>
	<content type="html"><![CDATA[<p>SwiftLaw, founded by Saketh Kesiraju, is a vertical AI platform that automates fund formation for emerging fund managers and their attorneys. The platform streamlines the creation of three core fund documents:</p>
<ul>
<li><strong>Limited Partnership Agreement (LPA)</strong></li>
<li><strong>Private Placement Memorandum (PPM)</strong></li>
<li><strong>Subscription Document</strong></li>
</ul>
<p>It does this by generating a client questionnaire from a term sheet, then using the responses to auto-draft a complete document set in minutes rather than the months it traditionally takes. It also includes a native DocX editor, a co-pilot feature that cross-references documents against whitelisted legal sources like the ILPA guidelines, and visualization tools for key fund terms and entity structures. SwiftLaw operates both as a direct full-stack fund formation service and as a platform licensed to law firms, with the system running locally to ensure data privacy.</p>
<figure aria-describedby="caption-attachment-558622"><img decoding="async" src="https://law.stanford.edu/wp-content/uploads/2026/02/swiftlaw-february-12-2026-codex-group-meeting-2.jpg" alt="SwiftLaw - February 12, 2026 Codex Group Meeting 1" srcset="https://law.stanford.edu/wp-content/uploads/2026/02/swiftlaw-february-12-2026-codex-group-meeting-2.jpg 1519w,https://law.stanford.edu/wp-content/uploads/2026/02/swiftlaw-february-12-2026-codex-group-meeting-2-300x140.jpg 300w,https://law.stanford.edu/wp-content/uploads/2026/02/swiftlaw-february-12-2026-codex-group-meeting-2-1024x479.jpg 1024w,https://law.stanford.edu/wp-content/uploads/2026/02/swiftlaw-february-12-2026-codex-group-meeting-2-768x359.jpg 768w,https://law.stanford.edu/wp-content/uploads/2026/02/swiftlaw-february-12-2026-codex-group-meeting-2-1152x538.jpg 1152w,https://law.stanford.edu/wp-content/uploads/2026/02/swiftlaw-february-12-2026-codex-group-meeting-2-171x80.jpg 171w,https://law.stanford.edu/wp-content/uploads/2026/02/swiftlaw-february-12-2026-codex-group-meeting-2-220x103.jpg 220w" sizes="(max-width: 1519px) 100vw, 1519px" referrerpolicy="no-referrer" loading="lazy"><figcaption><a href="https://www.tryswiftlaw.com/" rel="noopener noreferrer" target="_blank">SwiftLaw</a></figcaption></figure>
<p><a href="https://youtu.be/nJmYozbtaa0" target="_blank" rel="noopener noreferrer">Watch 2.12.26 CodeX Group Meeting with SwiftLaw</a></p>
<p><strong>Transcript</strong></p>
<p><span>Roland Vogl:</span></p>
<p><span>We have Saketh Kesiraju, who&rsquo;s the CEO and founder of SwiftLaw. So excited to learn from you.&nbsp;</span></p>
<p><span>Just a quick announcement&mdash;we just announced the CodeX FutureLaw conference. The main event is April 16, 2026, but we have other exciting events, including a hackathon, a bootcamp, and a UN AI for Good law track conference that entire week. We call it CodeX FutureLaw Week, and I encourage you to check out the program and hopefully join us in April.&nbsp;</span></p>
<p><span><a href="https://codexfuturelaw.com/" rel="noopener noreferrer" target="_blank">codexfuturelaw.com</a>&mdash;you can find all the information.&nbsp;</span></p>
<p><span>&nbsp;I will turn it over to Saketh. We&rsquo;ve learned about SwiftLaw before and are curious to hear where the journey took you, Saketh. Over to you.</span></p>
<p><span>Saketh Kesiraju:</span></p>
<p><span>Thank you so much, everyone. It&rsquo;s really an honor to be back here at CodeX. I think it was last year or maybe two years ago. A lot has happened since.</span></p>
<p><span>I&rsquo;m the founder of SwiftLaw. SwiftLaw is a vertical AI platform for fund formation. We just want to help people launch funds really quickly. That&rsquo;s the premise. A little bit about me&mdash;first off, I didn&rsquo;t necessarily get into funds or legal tech on purpose, I would say.&nbsp;</span></p>
<p><span>Back in 2023, I was very much into the crypto space and I was really trying to find people to buy the various crypto real world assets that I was trying to put on chain. I went to a conference in Salt Lake City with a bunch of emerging managers. I was trying to pitch these different emerging managers to sort of buy my crypto projects, and really no one was buying it. No one really cared. But while I was there, I befriended dozens of emerging managers. A lot of them were real estate professionals or HP executives and really these Bay Area Indian uncle types.&nbsp;</span></p>
<p><span>A lot of them really wanted to start these small venture funds, small real estate funds, and so on. While I was talking to them, you know, they were really excited about me because I was just an ambitious kid who took a gap from school to come there and work on something full time. They took me under their wing, and I basically worked with these fund managers building various tools for them. I built chatbots. I built a CRM automation tool.&nbsp;</span></p>
<p><span>I think the biggest headache that I saw was that their lawyers were just not very responsive to them, and the entire fund formation process was just something that was super expensive and tedious for them to execute on. I decided to take a swing at that. In that process, and in learning more about fund formation specifically for an emerging manager&mdash;someone that&rsquo;s trying to raise, let&rsquo;s say, under $150 million&mdash;I learned that large law firms, your AmLaw firms, are not necessarily geared towards these emerging managers.&nbsp;</span></p>
<p><span>Usually, it&rsquo;s a very high-margin service for the law firm. Usually, it&rsquo;s a very relationship-heavy sort of service that they provide. For an emerging manager that just wants to get their fund created and get them out the door so they can start raising capital, it&rsquo;s sort of a little bit of an inefficient process for them&mdash;actually highly inefficient. Oftentimes, some managers would wait six months to get their fund created. It&rsquo;s really sad if a fund manager&rsquo;s asset management career ends before it really starts. And that was something that I&rsquo;d seen constantly amongst these emerging managers, and so I sort of decided to do something about it.</span></p>
<p><span>In understanding why I chose this emerging manager sort of group to focus on, I really saw that even amongst my own peer group, there&rsquo;s tons of people starting funds. In fact, Roland was someone that I was speaking to who started a fund recently. There are other people in my network&mdash;they&rsquo;re starting funds. It&rsquo;s really this long tail that&rsquo;s emerging where even at fund launch, I met numerous fund managers that were doing really niche things or had new strategies.&nbsp;</span></p>
<p><span>For instance, one was starting a self-storage strategy fund, another one was starting a Pok&eacute;mon card investment sort of strategy fund. There are all these various strategies that a fund structure can support because the fundamental sort of insight that I had was that a fund is simply just a vehicle to pool capital and do something productive with it. If you have a strategy or if you have some sort of edge, then you could potentially go out into the market, raise capital, and then scale whatever that strategy is.</span></p>
<p><span>In understanding what actually fund formation is, fundamentally it&rsquo;s three documents.&nbsp;</span></p>
<p><span>It&rsquo;s your LPA documents, a limited partnership agreement. It&rsquo;s your PPM, which usually outlines your risk factors for your fund strategy, and your subscription document, which is how investors essentially onboard onto your fund. That&rsquo;s how you get investors to come onto your fund or to invest. Usually, these documents are 160-plus pages long. They&rsquo;re pretty long documents that previously, pre-LLMs, law firms would use something like Contract Express or try to use various tools, but it just wouldn&rsquo;t work because these documents were far too long.&nbsp;</span></p>
<p><span>Law firms would actually have their entire funds practice be manual. So you&rsquo;d have associates just write these 60-page documents or use templates and essentially do the full drafting process by hand. And what I realized, looking at numerous fund documents over the past years, is that funds really break down into a small set of recurring terms. As you can see here&mdash;fees, carry, jurisdiction, fund size, entity structure, and so on. What the work really is in practice is customizing these terms within those templates to create a full set of fund documents. That is essentially what our platform does&mdash;it&rsquo;s a vertical AI platform, a workspace for fund formation in which you can create a term sheet, assign that term sheet to a client so the client can fill it out, and then use that term sheet to generate your LPA and subscription documents, your larger fund document set. From there, you can feed that into a chatbot that&rsquo;s enriched with more research so that you can have an interactive chat session with your documents. And there are more tools that I&rsquo;ll show off right now, actually, instead of describing them.</span></p>
<p><span>Right here is the current fund formation workspace. Right here, I&rsquo;ll go to a new deal, I&rsquo;ll create a new client, and I&rsquo;ll create an empty client workspace. So now that I&rsquo;ve created a workspace for myself, I&rsquo;ll go into my documents tab&mdash;all the documents I&rsquo;ve been working on recently. And I&rsquo;ll click into a term sheet that I want to use. This is the term sheet. One thing to note here is that we&rsquo;ve built a native docs editor into this workspace. That&rsquo;s something that&rsquo;s usually extremely hard. DocX is the formatting underpinning for Microsoft Word. Any lawyer that drafts documents or uses documents needs a DocX-native editor to fundamentally do their work. We spent a lot of the last 18 months really perfecting the DocX editor and making sure that documents can get imported into it, retain formatting, get exported out, and retain the formatting. Sounds simple&mdash;quite hard to do in practice.</span></p>
<p><span>Right here, you can see a term sheet. It has all these placeholders or blanks in it. What I&rsquo;ll do from here is actually generate questions based off of those blanks so that I can create a questionnaire and assign it to a client. Right here, I&rsquo;ll say a questionnaire and generate this. Over in the questionnaire tab, this is one that I&rsquo;ve worked on before, and for the sake of time, I&rsquo;ll just show you what that looks like. So here, the questionnaire is created. I can hit share, and I get a link. This is the interface that the client will see&mdash;it&rsquo;ll be like a form view where I can answer the questions one by one. Yeah, 80 percent, etc., and when I&rsquo;m done, I just hit submit.&nbsp;</span></p>
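<p><span>As a minimal sketch of this blank-to-question step: assuming placeholders appear in the term sheet as bracketed tokens like [FUND_SIZE] (an assumption; SwiftLaw&rsquo;s actual format is not public), the flow could look like this.</span></p>
<pre><code># Sketch: turn bracketed placeholders in a term sheet into client
# questions. The placeholder syntax is an assumed convention.
import re

TERM_SHEET = """The Fund will have a target size of [FUND_SIZE].
The General Partner will receive carried interest of [CARRY_PERCENT].
The Fund will be organized in [JURISDICTION]."""

def extract_placeholders(text: str) -> list[str]:
    # Unique bracketed tokens, in order of first appearance.
    seen: list[str] = []
    for token in re.findall(r"\[([A-Z_]+)\]", text):
        if token not in seen:
            seen.append(token)
    return seen

def to_question(token: str) -> str:
    # "CARRY_PERCENT" becomes "What is the carry percent?"
    return f"What is the {token.replace('_', ' ').lower()}?"

for t in extract_placeholders(TERM_SHEET):
    print(to_question(t))
</code></pre>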
<p><span>Now my job as the client is over, so I can go back to the dashboard for the attorney and see that the answers from the client are all in the right places. If there&rsquo;s something that I need to change, I can do that, and if everything looks good, I can hit submit and generate. Now it will generate a complete term sheet. Well, it should&hellip; it should be a complete term sheet. I don&rsquo;t know what&rsquo;s going on. This is the humor with these live demos. But let me just show you what the term sheet it should generate looks like.</span></p>
<p><span>Okay, so this is sort of a term sheet that would get generated from it with all the insertions in the right place. From here, with this term sheet that&rsquo;s been generated, I can go back to my workspace and say, you know, create now an LPA and a subscription document based off of this term sheet. So let&rsquo;s say this one looks pretty good to me, and so I hit generate. Now it&rsquo;s essentially extracting all those terms and creating an entire LPA and subscription document. And the way it&rsquo;s doing that is we have sort of a set of LPAs that we have in the back end that we use as sort of golden reference documents. It essentially uses that as well as the terms in the term sheet to create this full document set for the LPA and the subscription document.</span></p>
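<p><span>For the fill-in step itself, a bare-bones version of placeholder replacement in a .docx file might look like the sketch below, using the python-docx package. Working at the run level is what lets formatting survive, which is part of why a faithful DocX editor is hard. This is an editorial sketch under assumptions, not SwiftLaw&rsquo;s pipeline; file names and tokens are examples.</span></p>
<pre><code># Sketch: fill [PLACEHOLDER] tokens in a .docx template while keeping
# each run's formatting intact. Requires the python-docx package.
from docx import Document

ANSWERS = {
    "[FUND_SIZE]": "$50,000,000",
    "[CARRY_PERCENT]": "20%",
    "[JURISDICTION]": "Delaware",
}

def fill_template(src_path: str, dst_path: str) -> None:
    doc = Document(src_path)
    for paragraph in doc.paragraphs:
        for run in paragraph.runs:
            # Editing run.text (not paragraph.text) preserves the
            # bold/font/size formatting attached to that run.
            for token, value in ANSWERS.items():
                if token in run.text:
                    run.text = run.text.replace(token, value)
    doc.save(dst_path)

fill_template("lpa_template.docx", "lpa_filled.docx")
</code></pre>
<p><span>A real implementation would also have to handle tokens that Word splits across multiple runs, one reason the &ldquo;sounds simple, quite hard to do in practice&rdquo; remark above rings true.</span></p>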
<p><span>So we&rsquo;ll give it maybe a few seconds. While that&rsquo;s going, I can just show you what it should look like, which is the document that&rsquo;s generated will look something like this. It&rsquo;ll be a full LPA, usually 60 pages, with&mdash;this is an insertion that was made. I just ran this entire sort of flow right before the call. I didn&rsquo;t really anticipate that it would start working on the call, but I guess you&rsquo;ve got to anticipate that. Well yeah, this is sort of a full 60-page LPA that was generated. You can see that all these insertions are made. Everything looks pretty good. I&rsquo;ll show you the subscription document as well.&nbsp;</span></p>
<p><span>It&rsquo;s essentially from term sheet to complete document set within a matter of minutes. And usually, if you&rsquo;re a private funds group, the time it takes to go from a term sheet or not even having a clear idea of what the terms of a fund are to a complete document set is months. It can be. In terms of drafting time, it&rsquo;s like 20-30 hours. And if you&rsquo;re getting billed at a partner rate, that&rsquo;s pretty expensive. And so fundamentally, we&rsquo;ve abstracted and compressed the timelines to be far, far shorter than what it is today.</span></p>
<p><span>Now that these fund documents are created, I&rsquo;ll just add them to my workspace here. And I can go to the Co-Pilot tab here, add one of these documents as context&mdash;let&rsquo;s say the term sheet&mdash;and add some sources. Let&rsquo;s say the market standards, SEC, the web, some state codes. I&rsquo;ll say something like, &ldquo;Hey, are there glaring errors or issues in my term sheet I should watch out for?&rdquo; And now what it&rsquo;s doing is essentially making web searches to all these various whitelisted sources that I&rsquo;ve previously bookmarked, finding the relevant sources, and then using our document as context to provide real, informed feedback on the document. And this is a back-and-forth conversation that you can continue to have. As you can see, there are really specific issues that it&rsquo;s highlighting. It provides some citations for me as well. I can click on one of these&mdash;this is a pretty in-depth LPA guide, the kind a professional fund attorney would use as a reference document. These are the recommendations. I can keep going, or show another visualization tool&hellip; well yeah, maybe folks have some questions, you know, anything specific you still want me to show, or otherwise, I&rsquo;m sure&hellip;</span></p>
<p><span>Yeah&hellip; this is just a visualization tab here, showing the fund that we&rsquo;ve just created. These key terms are the main backbone of a fund, and you can visualize them right here. And if I wanted to change one of the key terms, let&rsquo;s say from 80 to 50 million, it updates those things across the document. So this is just a really easy way to change terms in your fund without having to go into the document, search for it, and make drafting-related changes. You can just use this interface as a shortcut.</span></p>
<p><span>Another visualization we have here is an entity structure visualization. I find it pretty helpful considering that funds are complex structures, and it&rsquo;s hard to understand what goes into what and how carry works or how various parts of funds work. And so this is sort of just a visualization to help fund attorneys understand exactly what those documents actually mean in practice. But that&rsquo;s the sort of full workflow. It&rsquo;s really an end-to-end vertical platform for funds. And that&rsquo;s it.</span></p>
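<p><span>One piece of the workflow above, the co-pilot&rsquo;s restriction to whitelisted sources, can be approximated with a simple domain filter over search results. A sketch follows; the whitelist entries are example assumptions, not SwiftLaw&rsquo;s actual list.</span></p>
<pre><code># Sketch: keep only search results whose domain is on a whitelist,
# so the co-pilot cites pre-approved sources. Entries are examples.
from urllib.parse import urlparse

WHITELIST = {"ilpa.org", "sec.gov", "delcode.delaware.gov"}

def allowed(url: str) -> bool:
    host = urlparse(url).netloc.lower()
    # Accept the whitelisted domain itself or any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in WHITELIST)

results = [
    "https://www.sec.gov/rules/final/ia-advisers.htm",
    "https://randomblog.example.com/fund-tips",
    "https://ilpa.org/industry-guidance/",
]
citable = [u for u in results if allowed(u)]
print(citable)  # only the sec.gov and ilpa.org links survive
</code></pre>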
<p><span>Roland Vogl:</span></p>
<p><span>That&rsquo;s pretty awesome. So does this help practicing fund formation attorneys with their work just to be more efficient, or is this something that the fund managers will use themselves instead of using an attorney?</span></p>
<p><span>Saketh Kesiraju:</span></p>
<p><span>So it&rsquo;s really for the attorney. But we operate as a full-stack fund formation business, and we also sell this platform to other law firms. So if we have a fund manager that&rsquo;s in our network and wants to come to us, then we will essentially be the full-stack solution for that manager. That&rsquo;s really where our vision is. However, for now, we&rsquo;ve been working with pilot law firms to make our workflows a lot stronger and to validate them in the real world as well.</span></p>
<p><span>Roland Vogl:</span></p>
<p><span>So you&rsquo;re seeding the system with whatever the templates are from the law firm, and those templates, they have the different variations of specific clauses, right? Then from those templates, your system can create a questionnaire that will then help you sort of create the custom documents. Correct?</span></p>
<p><span>Saketh Kesiraju:</span></p>
<p><span>Correct, yeah. I think that&rsquo;s most helpful for the term sheet, right, because that&rsquo;s probably the easiest one-to-one placeholder replacement workflow. But for your larger LPA and subscription documents, it&rsquo;s really, you know, there are clause-specific replacements that need to be done. It&rsquo;s not just term-for-term replacements. And so that&rsquo;s where it runs sort of a longer process to do this sort of document creation workflow, which I skipped over earlier just to save some time. But yeah, that is sort of a more robust workflow.</span></p>
<p><span>Roland Vogl:</span></p>
<p><span>That&rsquo;s a cool feature, to be able to run those documents against some external benchmarks, for example. How does it know&mdash;what kind of data are you using to enable that feature?</span></p>
<p><span>Saketh Kesiraju:</span></p>
<p><span>Yeah, so the market terms really come from whitelisted sources that I&rsquo;ve run into or that fund attorneys have told me they use as reference points. So the LPA guidelines from the Institutional Limited Partners Association&mdash;they essentially set standards. They put out templates, and they put out various private market data and information regarding private funds. That&rsquo;s the number one source we go to. And then there&rsquo;s also just web search. So we also scrape the entire web, essentially, for private funds or market-related terms. And then we try to figure out if that&rsquo;s a credible source and then whitelist it and see if&hellip; yeah, essentially whether it is in line with what we hear from our attorneys as well.</span></p>
<p><span>Yeah. So in terms of privacy, there&rsquo;s no reinforcement learning happening to ingest the content. In fact, the entire system runs locally. So you can think of it as like Claude Code fine-tuned specifically for&hellip; or like Claude for Work, something that you might have seen, fine-tuned specifically for private funds, in that the entire system is just local, so it&rsquo;s using your local files and so on.</span></p>
<p><span>Roland Vogl:</span></p>
<p><span>So Cristiana and Juan are asking, can you get capital from other jurisdictions for entities outside the US?</span></p>
<p><span>Saketh Kesiraju:</span></p>
<p><span>Yeah, you can. In fact, one firm that we were working with was actually a Canadian GPU stakes firm. And for them, their limited partnership agreement had a special feeder vehicle as well that was necessary for them to essentially raise capital from outside US investors. And so yeah, the actual structuring of these fund vehicles is obviously dependent on an actual attorney advising you on this. But yeah, that is all doable on the platform.</span></p>
<p><span>Roland Vogl:</span></p>
<p><span>Thank you so much for the updates. It&rsquo;s exciting. But I know you have to hop to your next event already, so really appreciate you joining us. It&rsquo;s very cool. And yeah, good luck with all the next steps.</span></p>
<p><span>Saketh Kesiraju:</span></p>
<p><span>Thank you. It was a real pleasure to be here and speak to you all. So thank you for having me, and have a great day today.</span></p>
<p><span>Saketh Kesiraju can be reached at saketh@tryswiftlaw.com</span></p>]]></content>
	<updated>2026-02-12T18:26:48+00:00</updated>
	<author><name>CodeX</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-02-12T18:26:48+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="codex"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-18:/280210</id>
	<link href="https://law.stanford.edu/2026/02/17/the-cardboard-cockpit-and-the-apprentices-broom/" rel="alternate" type="text/html"/>
	<title type="html">The Cardboard Cockpit and the Apprentice’s Broom</title>
	<summary type="html"><![CDATA[<p>If you build a machine that becomes more useful the more you trust it, but more dangerous in precise...</p>]]></summary>
	<content type="html"><![CDATA[<p>If you build a machine that becomes more useful the more you trust it, but more dangerous in precisely equal measure, how much trust should you extend? Perhaps your answer is to trust it where it performs well, withhold trust where it does not, and calibrate as you go. That seems rational, but the answer assumes you can tell the difference. The Sorcerer&rsquo;s Apprentice could not. Remember? Mickey Mouse deployed a capability that worked perfectly, carrying water exactly as instructed, and the very success of the spell is what made the flood inevitable. The broom was not broken. Mickey lacked the mastery to govern what he had set in motion.</p>
<p>AI-powered chatbots present the same structure. In 2023, the National Eating Disorders Association replaced its human helpline staff with an AI chatbot called Tessa. The interface was polished. The responses arrived in fluid, reassuring prose. Within days, Tessa was dispensing weight-loss tips to people suffering from eating disorders. The organization shut her down. But the damage had already begun compounding. Tessa was not malfunctioning. She was functioning, bereft of constraint.</p>
<p>Absent adequate testing and oversight infrastructure, the rush to deploy AI-powered chatbots produces systems structurally incapable of meeting the obligations their interfaces promise. Some of those promises are explicit. Most are implicit, conveyed by the conversational fluency itself. This carries costs as well as benefits. Slowing deployment delays improvements and (potentially) degrades competitive edge. But the alternative is indefensible.</p>
<p>Rushed chatbot deployment presents two distinct problems. The first is an appearance problem. Chatbots that have not been adequately tested look indistinguishable from chatbots that have. Conversational fluency mimics competence so convincingly that users extend trust the system has not earned. This is the cardboard cockpit. Every dial, every switch, every instrument panel is in place. It looks like the real thing, but that&rsquo;s all. It does nothing. The second is a scaling problem. Chatbots that&nbsp;<em>do</em>&nbsp;perform well become more dangerous as adoption grows, because each increment of user trust widens the blast radius of any failure the system has not been tested against. And that is the apprentice&rsquo;s broom. It works and that is precisely what makes it dangerous. Both problems trace to the same root cause, which is deployment that outpaces evaluation.</p>
<h4>The Analytical Prism</h4>
<p>I want to examine this chatbot phenomenon through my AI Life Cycle Core Principles (AILCCP) framework. The AILCCP provides 37 principles for AI system assessment, mapped to life cycle phases, controls, and standards from NIST, ISO, and IEEE. It applies regardless of system architecture. In this context, it can help organizations build trustworthy chatbots, from informed consent to safety guardrails to demographic monitoring. Now, to be clear, this is just a sampling of what the AILCCP can be used for and I&rsquo;m intentionally keeping it bounded.</p>
<p>The AILCCP framework defines principles such as Safety, Transparency, and Fairness with precise scope, specific controls, and designated life cycle activation points, and yes, the capitalization is intentional. These capitalized terms carry the full weight of their framework definitions. So, when this note uses the same words in <em>lowercase</em>, such as &ldquo;safety&rdquo; or &ldquo;fairness,&rdquo; that is also intentional and refers to the ordinary English definition. Treat capitalization as a signal that the framework&rsquo;s specific machinery is being invoked.</p>
<h4>The Fluency Trap</h4>
<p>The AILCCP framework identifies Truth as a distinct, separate principle from Accuracy. This is because an AI system can be accurate in aggregate while generating specific outputs that are false. Chatbots present a uniquely dangerous variant of this problem. Their outputs arrive in grammatically perfect, contextually appropriate prose. The very fluency that makes them useful also makes their failures invisible. This is the appearance problem at its sharpest.</p>
<p>The Fidelity principle addresses precisely this kind of invisible failure. Fidelity requires that system outputs remain aligned with stated purpose and training objectives. A chatbot deployed for medical triage that generates plausible but incorrect diagnoses fails Fidelity in the most dangerous way possible. The output evades casual inspection. It looks right. It sounds right. It is wrong. In a triage context, people get sick or die.</p>
<p>Rushed deployment exacerbates this problem because it compresses the testing window where Fidelity failures would otherwise surface. The AILCCP framework designates a distinct life cycle phase, Evaluation &amp; Red Teaming, in which teams probe a system&rsquo;s performance, safety, robustness, and fairness before release. The phase exists because standard validation catches <em>expected</em> failures while adversarial testing catches <em>unexpected</em> ones. A red team simulating a distressed user who phrases symptoms ambiguously might discover that the chatbot defaults to cheerful reassurance rather than appropriate caution. That discovery, made before deployment, is an engineering insight. Made after deployment, it is an incident report. Truncate the phase, and those failure modes find users instead of testers.</p>
<h4>The Consent Illusion</h4>
<p>The AILCCP framework&rsquo;s Consent principle requires something more demanding than a clicked checkbox. It requires that consent interfaces mandate active acknowledgment of operational realities material to informed choice. This includes whether the system generates outputs probabilistically rather than retrieving from fixed sources, and whether outputs may contain confidently presented false information.</p>
<p>My sense is that most deployed chatbots fail this standard. They present conversational interfaces that, by their very design, suggest a competence and reliability the underlying system may not possess. Users interacting with a fluent chatbot form mental models based on human conversation, where fluency generally correlates with knowledge. The AILCCP framework recognizes this gap explicitly. Consent obtained from a user whose understanding of system operation is categorically incorrect does not satisfy the principle even where disclosure was formally complete.</p>
<h4>The Accountability Vacuum</h4>
<p>When a rushed chatbot produces harmful output, accountability becomes diffuse in ways the AILCCP framework anticipates. The Accountability principle warns that deficiency arises where ownership is diffuse, evidence is not preserved, and redress is undefined. Rushed deployment typically means incomplete logging, absent audit trails, and unclear lines of responsibility between developer and deployer.</p>
<p>The FTC has demonstrated willingness to act in this space. Its enforcement action against DoNotPay targeted deceptive claims about an AI system&rsquo;s legal capabilities. Evolv Technologies faced scrutiny for misleading marketing of AI security screening. These actions share a common thread. Organizations deployed AI systems whose marketed capabilities exceeded their tested performance. The gap between promise and reality was the product of insufficient evaluation, not insufficient technology.</p>
<h4>The Dialectical Reality</h4>
<p>Slower deployment means delayed access. AI chatbots can and do provide genuine value, particularly for populations underserved by existing systems. A well-designed medical chatbot could extend the reach of overburdened health systems. A well-designed legal chatbot could democratize access to legal information.</p>
<p>But the qualifier matters. &ldquo;<strong>Well-designed</strong>&rdquo; means tested. It means red-teamed. It means constrained by principles like Reliability, which requires that continuous validation ensures alignment between marketing claims and actual performance. It means constrained by Safety, which requires real-time monitoring and the ability to return to safe operation states. It means constrained by Transparency, which requires that marketing claims match terms and capabilities, preventing deceptive gaps between what a system promises and what it delivers.</p>
<p>None of these requirements are exotic. They are the ordinary discipline of building systems that work as advertised. The AILCCP framework maps them to specific life cycle phases, specific controls, and specific evidence artifacts. The infrastructure exists. The question is whether organizations will invest in it before deployment rather than after harm.</p>
<h4>Conclusion</h4>
<p>The cardboard cockpit fails not because it looks wrong, but because it looks right. The apprentice&rsquo;s broom fails not because it stops working, but because it never stops. These are not the same problem and conflating them leads to incomplete solutions. A system that merely appears competent needs better testing. A system that genuinely performs well but scales beyond its guardrails needs better Governance. Addressing only the appearance problem leaves the scaling problem untouched, and vice versa.</p>
<p>Organizations deploying AI chatbots should treat the AILCCP framework&rsquo;s phases not as bureaucratic obstacles but as engineering necessities. Evaluation &amp; Red Teaming exists because it surfaces failures before users encounter them. Pre-Deployment Review exists because review gates prevent premature release. Operations &amp; Monitoring exists because a system that passed every test at launch can still drift into harm at scale. These phases take time. That time is not wasted. It is the difference between a cockpit and its cardboard replica, between a sorcerer and his apprentice.</p>
<p>Ricky Bobby&rsquo;s father in&nbsp;<em>Talladega Nights</em>&nbsp;offered advice that drove an entire career of reckless behavior. &ldquo;If you&rsquo;re not first, you&rsquo;re last.&rdquo; (He later admitted he was drunk when he said it and that it made no sense.) The AI industry&rsquo;s version of this &ldquo;wisdom,&rdquo; that speed to market is the only competitive variable worth measuring, deserves similar correction. Being first means nothing if the product harms the people it purports to serve.</p>
<p></p>]]></content>
	<updated>2026-02-18T00:02:38+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-02-18T00:02:38+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="ai governance"/>

	<category term="artificial intelligence"/>

	<category term="eran kahana"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-17:/280161</id>
	<link href="https://law.stanford.edu/2026/02/17/the-evolving-role-of-the-clo-in-an-era-of-climate-and-sustainability-accountability-and-risk-management/" rel="alternate" type="text/html"/>
	<title type="html">The evolving role of the CLO in an era of climate and sustainability accountability and risk management </title>
	<summary type="html"><![CDATA[<p>For legal advisors to global firms, recent developments in climate policy have created a landscape c...</p>]]></summary>
	<content type="html"><![CDATA[<p><span>For legal advisors to global firms, recent developments in climate policy have created a landscape characterized by significant volatility. The past year alone has seen shifting political priorities, complex litigation, and evolving implementation timelines that require continuous monitoring.</span></p>
<p><span>However, for a General Counsel or Chief Legal Officer (CLO) to believe this surface-level volatility is the new normal would be a strategic error. Beneath the noise, a decisive legal signal has emerged: the center of gravity for climate governance is shifting rapidly from aspirational narratives to auditable, defensible data, and legal and executive teams need to adjust accordingly.</span></p>
<p><span>For years, the climate disclosure portfolio was largely the domain of the Chief Sustainability Officer (CSO), and reporting often came in the form of glossy reports designed for stakeholders and consumers. That era is drawing to a close. As an increasing number of disclosure-related legislative and regulatory proposals become enforceable laws, ownership of climate emissions data is migrating to the legal department. It is no longer solely a matter of corporate social responsibility; it is a matter of governance, risk, and compliance (GRC). For corporate leadership, the question is no longer what a company wants to say about its climate and sustainability ambitions, but whether it can provide data to demonstrate its actions and whether that proof can withstand the scrutiny of a regulator, a litigator, or a customs official.</span></p>
<h3><b>Global Compliance Regimes Emerge</b></h3>
<p><span>As the regulatory landscape hardens, three distinct categories of regulation illustrate the definitive shift toward mandatory, data-centric reporting:</span></p>
<p><b>1. Corporate Disclosure: Quantitative Rigor and Global Baselines</b></p>
<p><span>Across jurisdictions, the focus is shifting from voluntary ESG reporting to mandatory, auditable disclosure regimes that function like financial reporting.</span></p>
<ul>
<li><b>California (SB 253):</b><span> While climate-risk reporting under SB 261 remains subject to ongoing litigation, the quantitative reporting regime of SB 253 is proceeding apace. Large companies ($1B total revenue) doing business in California must disclose Scope 1 and 2 emissions starting in 2026, with Scope 3 following in 2027. The California Air Resources Board (CARB) is establishing a program where quantifiable facts are the only defensible basis for compliance.</span></li>
<li><b>Europe (CSRD):</b><span> The Corporate Sustainability Reporting Directive requires assured, comparable sustainability data. Even before non-EU parents file their first reports for financial years starting on or after January 1, 2028, their large EU subsidiaries must report in 2026 for fiscal year 2025, and customers already in scope may be demanding data to meet their own compliance requirements.</span></li>
<li><b>Global Baseline (ISSB):</b><span> The International Sustainability Standards Board (IFRS S1 &amp; S2) is establishing a durable, jurisdiction-agnostic framework for disclosure. Major economies &ndash; including the UK, Australia, and Brazil &ndash; are moving toward ISSB-aligned mandatory reporting, meaning that a standardized, &ldquo;financial-grade&rdquo; emissions inventory is becoming a prerequisite for global market access.</span></li>
</ul>
<p><b>2. Trade and Border Controls: Reporting plus Environmental Levies</b></p>
<p><span>Key imported products are now the subject of mandatory disclosures and accompanying environmental levies in the EU, with a number of other countries considering similar regimes.&nbsp;</span></p>
<ul>
<li><b>The EU&rsquo;s Carbon Border Adjustment Mechanism (CBAM):</b><span> Importers of industrial goods (cement, steel, aluminum, etc.) into the EU must quantify embedded emissions and reconcile them with EU-ETS-priced certificates. This effectively shifts emissions accounting from the sustainability office to border operations and customs compliance, where data gaps can result in goods being held at the port of entry. The UK has a similar regime that will become effective in 2027.</span></li>
</ul>
<p><b>3. Supply Chain and Circularity: The Liability of Provenance</b></p>
<p><span>Beyond emissions, regulators are targeting the social and physical lifecycle of products, converting voluntary &ldquo;responsible sourcing&rdquo; into mandatory legal liability.</span></p>
<ul>
<li><b>CSDDD &amp; National Standards:</b><span> Beginning in 2028, the EU&rsquo;s largest companies and non-EU companies that meet net turnover thresholds will be subject to the EU&rsquo;s Corporate Sustainability Due Diligence Directive (CSDDD). The CSDDD expands upon national statutes like France&rsquo;s Duty of Vigilance and Germany&rsquo;s Supply Chain Due Diligence Act. While not uniform, these laws require a range of corporate actions, including transition plan adoption, due diligence processes with suppliers, and disclosure of adverse impacts and actions taken. Some of these laws establish direct civil liability for parent companies, holding them accountable for damages resulting from a failure to prevent human rights and environmental abuses within their global value chains.</span></li>
<li><b>Extended Producer Responsibility (EPR):</b><span> Circular economy laws are converting physical waste into digital data obligations. Producers must now track data on product material composition, recyclability, and waste streams to calculate fees tied to their environmental impact, with penalties for companies that fail to track and produce that data.</span></li>
</ul>
<h3><b>The Fiduciary Imperative: A Defensible Legal Architecture</b></h3>
<p><span>For in-house counsel, a &ldquo;wait and see&rdquo; approach to climate disclosure is imprudent given the time required to build the infrastructure needed to respond to current and future regulatory mandates. The CLO should understand their core responsibility for establishing internal controls over climate reporting and for moving the organization away from ad-hoc processes toward regulatory-grade record keeping. The legal leadership team must drive the implementation of systems that track data provenance &ndash; who entered it, when, and why &ndash; to prepare for the inevitable arrival of &ldquo;limited&rdquo; and eventually &ldquo;reasonable&rdquo; assurance audits.</span></p>
<p><span>This is ultimately a matter of fiduciary duty. To navigate it, counsel must distinguish between the rigid standards required for audited quantitative data and the cautionary language required for narrative content, ensuring the former validates the latter. By treating emissions and sustainability reporting with the same consequence as financial reporting, including the necessary controls, assurance, and contract-ready evidence, legal departments not only ensure compliance but also build a necessary shield against future legal and political challenges.</span></p>
<h3><b>5 Things CLOs Should Be Doing to Ensure Compliance and Manage Risk</b></h3>
<p><span>For the CLO, the evolution of mandatory reporting requires a pivot from reactive oversight to a proactive orchestration of compliance architectures that manage exposure to long-term liabilities. Undertaking the following five strategic actions will create a strong foundation for successful compliance and risk management.&nbsp;</span></p>
<ol>
<li><b> Mandate the Implementation of a Defensible System of Record.</b><span> The CLO must treat emissions data not as an operational metric, but as a material class of record subject to Internal Controls over Sustainability Reporting (ICSR). This requires establishing a rigorous internal audit trail that tracks the provenance of every data point, including who entered it, when it was changed, and why. By enforcing version control and audit trails similar to those used in financial reporting, the legal department can create an evidentiary record capable of defending the corporation against regulatory, legal, and reputational challenges (a minimal sketch of such a provenance record follows this list).</span></li>
<li><b> Transition from &ldquo;Firewalling&rdquo; to Strategic Alignment.</b><span> Legal counsel must pivot from the traditional strategy of strictly separating historical facts from forward-looking plans to a new standard of rigorous consistency. Emerging regulations and anti-greenwashing case law now penalize the gap between a company&rsquo;s &ldquo;hard numbers&rdquo; (audited emissions) and its &ldquo;soft narratives&rdquo; (e.g., Net Zero targets). Rather than insulating these streams, the CLO must ensure they are </span><i><span>connected</span></i><span>: validating that current capital expenditure and emissions data actively substantiate the transition story. This alignment mitigates the risk that a discrepancy between promise and performance becomes actionable evidence of misleading conduct.</span></li>
<li><b> Transform Voluntary Supply Chain Data into Binding Contractual Obligations.</b><span> To address the enforcement gap created by extraterritorial regimes like the EU&rsquo;s CSRD, the CLO must systematically update supplier agreements to mandate the provision of supplier data needed by the company to meet its own compliance obligations. This involves shifting from voluntary data requests to binding clauses that include audit cooperation rights and indemnification for data inaccuracies. This ensures the company possesses the legal leverage necessary to obtain the data required for its own compliance, effectively transferring regulatory risk upstream to the source of the emissions.</span></li>
<li><b> Elevate Climate Reporting to a Core Governance Priority.</b><span> The corporate counsel must ensure the Board of Directors exercises its duty of oversight regarding material regulatory risks, including potential market access denials or significant penalties. Best practices should include establishing quarterly dashboards that track readiness for specific reporting deadlines and integrating climate data integrity as a standard component of M&amp;A due diligence. By underscoring that these issues are fiduciary duties, the CLO protects the directors and officers from derivative claims related to oversight failures.</span></li>
</ol>
<p><b>5. Orchestrate Unified Governance Across Legal, Finance, and Sustainability.</b><span> The CLO must act as the cross-functional orchestrator &ndash; engaging with the CFO, CSO and even CPO &ndash; to eliminate data silos that can create exposure to liability, specifically the risk of material omission where internal risk data contradicts public sustainability narratives. By applying financial-grade internal controls to technical emissions data, the legal department ensures consistency across all public disclosures and regulatory filings. Such a unified governance structure prevents discrepancies that often serve as the factual basis for greenwashing litigation.</span></p>
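<p><span>As flagged in item 1 above, here is a minimal sketch of what a provenance-tracked emissions record could look like. The field names and the append-only pattern are illustrative assumptions, not a reference to any particular ICSR system.</span></p>
<pre><code># Sketch of an append-only provenance trail for one emissions data
# point: who entered it, when, and why. All fields are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Revision:
    value_tco2e: float   # reported value, tonnes CO2-equivalent
    entered_by: str      # accountable person or system
    entered_at: datetime
    reason: str          # why the value was entered or changed

@dataclass
class EmissionsRecord:
    metric: str                                  # e.g. "scope1_total"
    history: list = field(default_factory=list)  # append-only

    def record(self, value: float, who: str, why: str) -> None:
        # Never overwrite: every change becomes a new Revision,
        # preserving the trail an assurance audit would ask for.
        self.history.append(
            Revision(value, who, datetime.now(timezone.utc), why)
        )

    def current(self) -> float:
        return self.history[-1].value_tco2e

rec = EmissionsRecord("scope1_total")
rec.record(1200.0, "j.doe", "initial FY2025 inventory")
rec.record(1185.5, "j.doe", "corrected fuel conversion factor")
print(rec.current(), len(rec.history))  # 1185.5 2
</code></pre>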
<p><em>Catherine Atkin and <a href="https://www.linkedin.com/in/mjschmitz/" rel="noopener noreferrer" target="_blank">Michael Schmitz</a> are the Co-Chairs of the Stanford CodeX Climate Data Policy Initiative (CDPI)</em>.</p>]]></content>
	<updated>2026-02-17T12:46:18+00:00</updated>
	<author><name>Catherine Atkin, Michael Schmitz</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-02-17T12:46:18+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="cdpi"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-16:/280137</id>
	<link href="https://www.gautrais.com/conferences/enseignement-du-droit-numerique/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=enseignement-du-droit-numerique" rel="alternate" type="text/html"/>
	<title type="html">Enseignement du droit + numérique, Salon François Chevrette (16 février 2026)</title>
	<summary type="html"><![CDATA[<p>Cette conf&eacute;rence explore les mutations profondes de la p&eacute;dagogie juridique &agrave; l&rsquo;&egrave;re digitale. Commen...</p>]]></summary>
	<content type="html"><![CDATA[<div dir="auto">
<p>This conference explores the profound transformations of legal pedagogy in the digital era. How is digital technology changing the way law is learned? What skills must tomorrow&rsquo;s jurists acquire to navigate a complex technological environment?</p>
<p>Venez &eacute;couter trois pan&eacute;listes d&rsquo;exception qui partageront leurs r&eacute;flexions sur ce sujet&nbsp;:</p>
<ul>
<li>
<p><b>Annie Rochette</b>: An expert in legal pedagogy and competency-based approaches, a former director of the &Eacute;cole du Barreau de la Colombie-Britannique and a seasoned professor, she has also served as president of the Association Canadienne des professeur(e)s de droit, francophone editor of the Revue femmes et droit, and editor-in-chief (and founder) of the Revue l&rsquo;enseignement du droit au Canada. Her interests span the training of jurists across the entire continuum, addressing questions of pedagogy, professional competence, and competency-based approaches, with particular attention to skills such as reflective practice and the development of a professional identity.</p>
</li>
<li>
<p><b>Alicia M&acirc;zouz</b>: A doctor of law, senior lecturer (ma&icirc;tresse de conf&eacute;rences), and academic director of the Licence Droit &amp; Culture Juridique (Issy-les-Moulineaux), she pursued her research and teaching at several universities (Universit&eacute; Paris I Panth&eacute;on-Sorbonne, Universit&eacute; de Cergy-Pontoise, Universit&eacute; Paris-Est Marne-la-Vall&eacute;e) before joining the Facult&eacute; libre de Droit as a permanent faculty member. Her work centers on the relationship between the human body and the law, along with an interest in civil law and the sources of law, and in the difficulties the law faces when confronted with new technologies.</p>
</li>
<li>
<p><b>Luka Sanchez</b>: A doctoral candidate in law at the Universit&eacute; de Montr&eacute;al, his research focuses on university legal education.</p>
</li>
</ul>
</div>
<div dir="auto">&#128205; En personne &agrave; l&rsquo;Universit&eacute; de Montr&eacute;al (Salon Fran&ccedil;ois-Chevrette &ndash; A-3464);</div>
<div dir="auto">&#128250; Diffusion en direct sur Zoom;</div>
<div dir="auto">&#128351; 16h30 | 1 heure 30 de formation continue reconnue</div>
<div dir="auto"></div>
<div dir="auto">&#128073;<strong>&nbsp;<a href="https://fcdroit.umontreal.ca/Web/MyCatalog/View?id=THbCUYWotMt9Wb18EikKuw%3d%3d&amp;cvState=cvDate=05-02-2026#scrollInscription" rel="noopener noreferrer" target="_blank">Inscription gratuite ici</a></strong></div>
<div dir="auto"></div>
<div dir="auto">
<p><b>Why attend?</b></p>
<p>Legal education can no longer ignore the digital. This conference is an opportunity to reflect on how the transmission of legal knowledge is evolving and to question the responsibility of institutions in training jurists who are agile and aware of technological issues.</p>
<p>Whether you are a student, professor, practitioner, or researcher, this event will offer essential food for thought on the future of our profession.</p>
<p><b>Register today!</b></p>
</div>]]></content>
	<updated>2026-02-16T22:18:57+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-02-16T22:18:57+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-16:/280127</id>
	<link href="https://law.stanford.edu/2026/02/16/from-principles-to-practice-the-48-controls-that-make-responsible-ai-auditable-defensible-and-real/" rel="alternate" type="text/html"/>
	<title type="html">From Principles to Practice: The 48 Controls That Make Responsible AI Auditable, Defensible, and Real</title>
	<summary type="html"><![CDATA[<p>What is the Controls Table?
The Controls table is one of 13 tables that comprise the AI Life Cycle C...</p>]]></summary>
	<content type="html"><![CDATA[<h3>What is the Controls Table?</h3>
<p>The Controls table is one of 13 tables that comprise the AI Life Cycle Core Principles (AILCCP) framework. (A public-facing version of the AILCCP is available <a href="https://law.stanford.edu/2023/03/17/ai-life-cycle-core-principles/" rel="noopener noreferrer" target="_blank">here</a>.)</p>
<p>The Controls table currently contains <strong>48 actionable controls</strong>&mdash;specific mechanisms, policies, and technical safeguards that translate abstract AI principles into concrete, implementable measures. Each control is classified by domain, function, and principle alignment, enabling organizations to systematically operationalize responsible AI governance across the entire system lifecycle.</p>
<p>Note: The number of controls expands as my research advances and evolves.</p>
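<p>To make the classification concrete, here is a minimal sketch of how a control record with domain, function, and principle fields might be represented and queried. The field names and example entries are illustrative assumptions drawn from the tables below, not the actual AILCCP data model.</p>
<pre>
from dataclasses import dataclass

@dataclass(frozen=True)
class Control:
    # Illustrative record; field names are assumptions, not the AILCCP schema.
    name: str
    domain: str            # e.g. "Security", "Governance", "Regulatory"
    function: str          # e.g. "Preventive", "Detective", "Corrective"
    principles: frozenset  # principles the control maps to

CONTROLS = [
    Control("Supply Chain Vetting", "Security", "Preventive",
            frozenset({"Security", "Accountability"})),
    Control("Agent Kill Switch", "Safety", "Corrective",
            frozenset({"Safety", "Accountability"})),
    Control("Intervention Audit Trail", "Monitoring", "Detective",
            frozenset({"Accountability", "Transparency"})),
]

def by_domain(controls, domain):
    # Filter controls by domain, e.g. to assemble a compliance-readiness view.
    return [c for c in controls if c.domain == domain]

def covering(controls, principle):
    # Find controls that provide coverage evidence for a given principle.
    return [c for c in controls if principle in c.principles]

print([c.name for c in by_domain(CONTROLS, "Security")])       # ['Supply Chain Vetting']
print([c.name for c in covering(CONTROLS, "Accountability")])  # all three examples
</pre>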
<h4>Structure at a Glance</h4>
<table>
<thead>
<tr>
<td>Attribute</td>
<td>Coverage</td>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Total Controls</strong></td>
<td>48</td>
</tr>
<tr>
<td><strong>Control Domains</strong></td>
<td>Security, Technical, Governance, Monitoring, Testing &amp; Assurance, Regulatory, Documentation, Safety, Process, Transparency, Maintenance</td>
</tr>
<tr>
<td><strong>Control Functions</strong></td>
<td>Preventive, Detective, Directive, Corrective, Compensating, External Benchmarking</td>
</tr>
<tr>
<td><strong>Principle Linkages</strong></td>
<td>Each control maps to relevant principles (e.g., Security, Accountability, Privacy, Safety)</td>
</tr>
</tbody>
</table>
<h4>Five Practical Use Cases</h4>
<table>
<thead>
<tr>
<td>Use Case</td>
<td>Description</td>
<td>Example Controls</td>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Regulatory Compliance Readiness</strong></td>
<td>Filter controls by domain (e.g., &ldquo;Regulatory&rdquo;, &ldquo;Governance&rdquo;) to identify which mechanisms satisfy EU AI Act, ISO/IEC 42001, or sector-specific requirements. Use the principle alignment field to demonstrate coverage across transparency, accountability, and safety mandates.</td>
<td>Government Issued Permit, Certification, OWASP AI Exchange Compliance</td>
</tr>
<tr>
<td><strong>Security Threat Mitigation</strong></td>
<td>Deploy preventive and detective controls from the Security domain to protect AI systems against adversarial attacks, prompt injection, data poisoning, and model extraction. Map controls to the Security and Privacy principles for audit evidence.</td>
<td>OWASP AI Exchange Compliance, Supply Chain Vetting, Multi-Agent Protocol Security, Confidential Computing Environment</td>
</tr>
<tr>
<td><strong>AI Incident Response Planning</strong></td>
<td>Identify corrective controls (e.g., kill switches, rollback mechanisms) to build incident response runbooks. Link these to Safety and Accountability principles to ensure rapid containment and defensible audit trails.</td>
<td>Agent Kill Switch, Rollback and Quarantine, Rate and Scope Limiter, Intervention Audit Trail</td>
</tr>
<tr>
<td><strong>Board-Level Risk Governance</strong></td>
<td>Use governance and monitoring controls to establish executive oversight cadences, acceptance thresholds, and KPI dashboards. Align with Governance, Accountability, and Metrics principles to support quarterly board reviews.</td>
<td>Acceptance Threshold Governance, Culture &amp; Capability Index, Adoption &amp; Acceptance Forecasting</td>
</tr>
<tr>
<td><strong>Third-Party Vendor Assessment</strong></td>
<td>Apply supply chain and documentation controls when onboarding AI vendors or integrating third-party models. Demonstrate due diligence by linking to Accountability, Security, and Data Stewardship principles.</td>
<td>Supply Chain Vetting, Context-to-Output Lineage, Continuous Validation, Certification</td>
</tr>
</tbody>
</table>
<h4>An Operational Toolkit</h4>
<p>The Controls table transforms the AILCCP from a conceptual framework into an operational toolkit.</p>
<p>Organizations can:</p>
<ul>
<li><strong>Trace compliance </strong>from high-level principles down to specific controls and evidence artifacts</li>
<li><strong>Customize governance </strong>by selecting controls appropriate to their risk profile and regulatory environment</li>
<li><strong>Demonstrate accountability</strong>&nbsp;through documented control rationales and principle alignments</li>
<li><strong>Scale responsibly </strong>by applying proportionate controls as AI capabilities evolve</li>
</ul>
<p>This structured approach ensures that responsible AI is not just aspirational&mdash;it is&nbsp;<strong>auditable, defensible, and actionable.</strong></p>
]]></content>
	<updated>2026-02-16T16:22:18+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-02-16T16:22:18+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="ai governance"/>

	<category term="eran kahana"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-16:/280109</id>
	<link href="https://www.gautrais.com/presse/commerce-en-ligne-et-achats-impulsifs-une-mecanique-bien-huilee/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=commerce-en-ligne-et-achats-impulsifs-une-mecanique-bien-huilee" rel="alternate" type="text/html"/>
	<title type="html">Commerce en ligne et achats impulsifs, une mécanique bien huilée (Le Devoir, 14 février 2026)</title>
	<summary type="html"><![CDATA[<p>Bien que le commerce &eacute;lectronique soit commode et facile, il peut aussi nous inciter &agrave; d&eacute;penser sous...</p>]]></summary>
	<content type="html"><![CDATA[<p><strong>Bien que le commerce &eacute;lectronique soit commode et facile, il peut aussi nous inciter &agrave; d&eacute;penser sous l&rsquo;impulsion, une chose que plusieurs regrettent par la suite. Et certains d&eacute;taillants le savent tr&egrave;s bien.</strong></p>
<blockquote><p>La possibilit&eacute; de se procurer en quelques clics des milliers d&rsquo;articles dans le confort de son foyer vaut aux achats en ligne leur popularit&eacute; croissante. Une habitude ciment&eacute;e pendant le confinement pand&eacute;mique. Selon l&rsquo;enqu&ecirc;te NETendances 2024 sur le commerce en ligne de l&rsquo;Acad&eacute;mie de la transformation num&eacute;rique de l&rsquo;Universit&eacute; Laval, environ trois adultes qu&eacute;b&eacute;cois sur quatre (74 %) font de tels achats. C&rsquo;est Amazon qui r&eacute;colte la part du lion&nbsp;: pr&egrave;s de la moiti&eacute; des r&eacute;pondants ont affirm&eacute; y avoir effectu&eacute; au moins 75 % de leurs d&eacute;penses en ligne. Le g&eacute;ant am&eacute;ricain est talonn&eacute; par des entreprises chinoises, comme Temu ou Shein, qui allient de tr&egrave;s bas prix &agrave; un marketing &laquo;&nbsp;particuli&egrave;rement agressif&nbsp;&raquo;, indique le document.</p></blockquote>
<h4><a href="https://www.ledevoir.com/economie/consommation/955653/commerce-ligne-achats-impulsifs-mecanique-bien-huilee" rel="noopener noreferrer" target="_blank">Pour en savoir +</a></h4>]]></content>
	<updated>2026-02-14T14:06:45+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-02-14T14:06:45+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-13:/279887</id>
	<link href="https://law.stanford.edu/2026/02/13/neuroimaging-evidence-in-criminal-cases/" rel="alternate" type="text/html"/>
	<title type="html">Neuroimaging Evidence in Criminal Cases</title>
	<summary type="html"><![CDATA[<p>Imagine you&rsquo;re a juror in a murder trial. The defense attorney wheels in a large monitor displ...</p>]]></summary>
	<content type="html"><![CDATA[<p>Imagine you&rsquo;re a juror in a murder trial. The defense attorney wheels in a large monitor displaying colorful brain scans of the defendant. An expert witness points to areas highlighted in blue and red, explaining that these images show abnormalities consistent with schizophrenia. The attorney argues that these scans prove the defendant couldn&rsquo;t tell right from wrong when he committed the crime. As you look at the images, you can&rsquo;t help but notice how scientific, authoritative, and compelling they appear. After all, you&rsquo;re looking directly at their brain.</p>
<p>But will you or any other juror make the &ldquo;right&rdquo; call when faced with information they may not fully understand?</p>
<h3><strong>Actus Reus and Mens Rea</strong></h3>
<p>Before diving into brain scans, it&rsquo;s important to understand two fundamental principles that typically determine criminal liability in the United States: <em>actus reus</em> and <em>mens rea</em>.</p>
<p><em>Actus reus</em> (Latin for &ldquo;guilty act&rdquo;) refers to the physical act of committing a crime. This is usually the more straightforward element to prove, because it is typically based on objective action, such as pulling the trigger, taking property, or striking the victim.[1]</p>
<p><em>Mens rea</em> (Latin for &ldquo;guilty mind&rdquo;) refers to the mental state or intent behind the act. This is the harder element to prove. It often looks at considerations such as whether the defendant intended to act or whether it was an unfortunate accident, and whether they knew or should have known their conduct was wrong.[2] Different crimes require different levels of intent. Murder typically requires intent to kill, while manslaughter might involve recklessness rather than specific intent.[3]</p>
<p>Both elements must typically be present for someone to be found guilty of a crime. The state can&rsquo;t convict someone of first-degree murder simply because they caused a death; the prosecution must also adequately prove they had the requisite mental state.</p>
<p>Attorneys have started using neuroimaging as evidence to argue that abnormal brain scans demonstrate that the killer lacked the mental capacity to form this necessary intent, or that they couldn&rsquo;t distinguish right from wrong due to mental illness. In essence, lawyers are trying to use neuroscience to prove their client lacked <em>mens rea</em> for a given criminal charge.[4]</p>
<h3><strong>How Courts Decide Whether to Admit Brain Scans as Evidence</strong></h3>
<p>When a party wants to introduce scientific or technical evidence like brain scans, courts don&rsquo;t simply accept it at face value. After all, jurors likely don&rsquo;t have the expertise needed to make credibility determinations when it comes to neuroimaging. In 1993, the U.S. Supreme Court set out the criteria for expert testimony in <em>Daubert v. Merrell Dow Pharmaceuticals, Inc.</em> The <em>Daubert </em>Court noted that experts must testify to scientific knowledge that will assist the jury in understanding the facts.[5] To qualify, the scientific evidence must be reliable, as shown through testing, peer review and publication, a known or potential rate of error, and general acceptance in the relevant scientific community.[6]</p>
<p>The <em>Daubert </em>Court also referred to Federal Rules of Evidence 702 and 403.[7] Federal Rule of Evidence 702 requires that expert testimony be helpful to the jury, based on sufficient facts and data, properly tested with scientific methods, and appropriately applied to the case at hand.[8] Federal Rule of Evidence 403 allows evidence to be excluded if it might mislead the jury, among other grounds such as wasting time.[9]</p>
<p>In 2012, the Sixth Circuit applied the <em>Daubert </em>criteria in <em>United States v. Semrau</em>. In <em>Semrau,</em> the defendant was a doctor charged with healthcare fraud who attempted to introduce functional magnetic resonance imaging (fMRI) test results showing he was &ldquo;generally truthful&rdquo; when claiming he tried to follow proper billing practices in good faith.[10] fMRI is a technique for measuring changes in blood oxygenation and flow in the brain that occur in response to neural activity.[11] However, the signal is nonspecific, since it averages the activity of millions of cells.[12] This means that it cannot easily differentiate between specific lobes of the brain, nor can it tell us exactly what a person is thinking.</p>
<p>Ultimately, the <em>Semrau </em>court decided to exclude this evidence due to reliability problems. One reason was that the expert witness acknowledged fMRI lie detection had &ldquo;a huge false positive problem,&rdquo; where truth-tellers were incorrectly identified as liars 60-70% of the time.[13] A 2009 study asked participants to commit a &ldquo;mock crime&rdquo; of stealing and damaging CDs, and reported that fMRI may have high sensitivity but low specificity.[14] The study noted that this result meant an fMRI test may be helpful to &ldquo;rule out&rdquo; an innocent suspect, but not very helpful in &ldquo;ruling in&rdquo; a guilty suspect.[15]</p>
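<p>A quick back-of-the-envelope calculation shows why that asymmetry matters. The numbers below are illustrative assumptions chosen for demonstration, not the study&rsquo;s exact figures: with high sensitivity but low specificity, a positive result adds little, while a negative result is considerably more informative.</p>
<pre>
# Illustrative Bayes calculation (assumed numbers, not the 2009 study's figures).
sensitivity = 0.90   # P(test flags "lying" | subject is lying)
specificity = 0.35   # low: many truth-tellers are wrongly flagged as liars
prevalence  = 0.50   # assume half of tested subjects are actually lying

p_flagged = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_flagged              # P(lying | flagged)
npv = specificity * (1 - prevalence) / (1 - p_flagged)  # P(truthful | cleared)

print(f"P(lying | flagged)    = {ppv:.2f}")  # ~0.58: weak evidence to "rule in"
print(f"P(truthful | cleared) = {npv:.2f}")  # ~0.78: better at "ruling out"
</pre>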
<h3><strong>Brain Scans to Prove Lack of Criminal Responsibility</strong></h3>
<p>In <em>Commonwealth v. Chism</em>, decided in 2025 by the Massachusetts Supreme Judicial Court, the defense of a 14-year-old defendant charged with first-degree murder, aggravated rape, and armed robbery introduced a structural MRI (sMRI) brain scan containing detailed images of his brain&rsquo;s anatomy.[16] The scans showed volumetric abnormalities (differences in the size of certain brain structures) consistent with schizophrenia, and his lawyer used this evidence to argue that the 14-year-old couldn&rsquo;t understand right from wrong due to mental illness.[17]</p>
<p>The court relied on a 2014 multidisciplinary consensus report from Emory University, which concluded that &ldquo;the practice of performing imaging studies on a defendant in order to shed light on brain function or state of mind at the time of a prior criminal act is problematic.&rdquo;[18] The key reason is that brain scans taken months or years after a crime occurred cannot tell us what was happening in the defendant&rsquo;s brain at the moment of the criminal act.[19] The court also noted methodological issues because the control group (the &ldquo;normal&rdquo; brains the defendant&rsquo;s scans were compared against) wasn&rsquo;t age-matched to the 14-year-old defendant, making the comparison scientifically questionable.[20]</p>
<h3><strong>What This Means for the Future</strong></h3>
<p>Does this mean neuroimaging evidence will never be admissible in criminal cases? Not necessarily. Courts seem to have left open the possibility that as science advances and gains broader acceptance, such evidence might meet admissibility standards in the future.</p>
<p>However, the timing issue remains. Criminal law asks whether a defendant had a particular mental state at a specific moment in the past, while neuroimaging shows us what a brain looks like now or how it responds to stimuli in a current testing situation. Bridging that temporal gap requires scientific advances that don&rsquo;t yet exist.</p>
<p>As neuroscience continues to advance, courts will likely continue to grapple with how and whether brain imaging should influence criminal responsibility. As legal scholar Francis Shen notes, the goal is not to wait for &ldquo;magical tools,&rdquo; but to adopt an entrepreneurial &ldquo;What now?&rdquo; mentality.[21] Perhaps the way forward is to define clear expert witness guidelines for what type of neuroimaging can be used in the courtroom, or to create better jury instructions that lead to a rightfully skeptical jury.</p>
<p>Understanding these issues will affect how we balance scientific advancement with legal protections, how we determine criminal responsibility, and ultimately, how we define what it means to have a &ldquo;guilty mind&rdquo; in an age where we can peer inside the brain itself.</p>
<h3><strong>References</strong></h3>
<p>[1] Uri Maoz &amp; Gideon Yaffe, <em>What Does Recent Neuroscience Tell Us About Criminal Responsibility?</em>, 3 J.L. &amp; Biosciences 120, 122 (2015).</p>
<p>[2] <em>Id.</em> at 122&ndash;23.</p>
<p>[3] <em>Id.</em> at 130.</p>
<p>[4] Neal Feigenson, <em>Brain Imaging and Courtroom Evidence: On the Admissibility and Persuasiveness of fMRI</em>, 2 Int&rsquo;l J. L. Context 233, 234 (2006).</p>
<p>[5] <em>Daubert v. Merrell Dow Pharms., Inc.</em>, 509 U.S. 579, 588 (1993).</p>
<p>[6] <em>Id.</em> at 593&ndash;95.</p>
<p>[7] <em>Daubert</em>, 509 U.S. at 594&ndash;95.</p>
<p>[8] Fed. R. Evid. 702.</p>
<p>[9] Fed. R. Evid. 403.</p>
<p>[10] <em>United States v. Semrau</em>, 693 F.3d 510, 515 (6th Cir. 2012).</p>
<p>[11] Nikos K. Logothetis, <em>What We Can Do and What We Cannot Do With fMRI</em>, 453 Nature 869, 869 (2008).</p>
<p>[12] <em>Id.</em> at 876.</p>
<p>[13] <em>Semrau</em>, 693 F.3d at 518.</p>
<p>[14] F. Andrew Kozel et al., <em>Functional MRI Detection of Deception After Committing a Mock Sabotage Crime</em>, 54 J. Forensic Sci. 220, 228 (2009).</p>
<p>[15] <em>Id. </em></p>
<p>[16] <em>Commonwealth v. Chism</em>, 495 Mass. 358, 360, 370&ndash;71 (2025).</p>
<p>[17] <em>Id.</em></p>
<p>[18] <em>Id.</em> at 376&ndash;77.</p>
<p>[19] <em>Id.</em> at 376.</p>
<p>[20] <em>Id.</em></p>
<p>[21] Francis X. Shen, <em>Law and Neuroscience 2.0</em>, 48 Ariz. St. L.J. 1043, 1085 (2016).</p>]]></content>
	<updated>2026-02-13T19:49:16+00:00</updated>
	<author><name>Katherine Wu</name></author>
	<source>
		<id>https://law.stanford.edu/blog/lawandbiosciences/</id>
		<link rel="self" href="https://law.stanford.edu/blog/lawandbiosciences/"/>
		<updated>2026-02-13T19:49:16+00:00</updated>
		<title>Law and Biosciences Blog - Stanford Law School</title></source>

	<category term="criminal law"/>

	<category term="evidence"/>

	<category term="litigation"/>

	<category term="neuroimaging"/>

	<category term="neurolaw"/>

	<category term="neuroscience"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-12:/279728</id>
	<link href="https://www.gautrais.com/conferences/formaliser-linformel-le-dictionnaire-de-la-norme/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=formaliser-linformel-le-dictionnaire-de-la-norme" rel="alternate" type="text/html"/>
	<title type="html">Formaliser l&amp;#8217;informel: le dictionnaire de la norme, Formaliser l&#039;informel: le dictionnaire de la norme, Faculté de droit - Ottawa (11 février 2026)</title>
	<summary type="html"><![CDATA[<p>Le num&eacute;rique en particulier, et les domaines techniques en g&eacute;n&eacute;ral, disposent d&rsquo;un cadre normatif pl...</p>]]></summary>
	<content type="html"><![CDATA[<p><span dir="ltr" lang="FR-FR" xml:lang="FR-FR">Le num&eacute;rique en particulier, et les domaines techniques en g&eacute;n&eacute;ral, disposent d&rsquo;un cadre normatif pluri-&eacute;tag&eacute; qui mobilise d&eacute;sormais une multitude d&rsquo;appellations. Derri&egrave;re des termes tels que gouvernance, cor&eacute;gulation, code de conduite, normes techniques, et tant d&rsquo;autres, c&rsquo;est plus d&rsquo;une cinquantaine de termes associ&eacute;s aux processus normatifs qui ne manquent pas de poindre &ccedil;&agrave; et l&agrave;.&nbsp;</span></p>
<p><span dir="ltr" lang="FR-FR" xml:lang="FR-FR">Derri&egrave;re cette multiplicit&eacute; de terminologies souvent employ&eacute;es, rarement expliqu&eacute;es, il importait de tenter de mettre de l&rsquo;ordre mais aussi se questionner sur une mani&egrave;re de faire qui s&rsquo;est impos&eacute;e sans que l&rsquo;on se questionne parfois sur les cons&eacute;quences sociales, politiques, &eacute;conomiques. Le droit a chang&eacute;&nbsp;; et bien au-del&agrave; des usages et coutumes, expressions traditionnelles du domaine juridique, il nous importait de pr&eacute;ciser ces concepts en &eacute;mergence. Dans la tradition des dictionnaires, la conf&eacute;rence pr&eacute;sentera ce travail collectif en cours d&rsquo;&eacute;laboration.</span></p>]]></content>
	<updated>2026-02-12T04:01:56+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-02-12T04:01:56+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-11:/279667</id>
	<link href="https://law.stanford.edu/2026/02/11/context-stewardship-what-source-by-source-authorization-misses/" rel="alternate" type="text/html"/>
	<title type="html">Context Stewardship: What Source-by-Source Authorization Misses</title>
	<summary type="html"><![CDATA[<p>Abstract
AI systems access data. That access is governed. Each source is reviewed, scoped, and autho...</p>]]></summary>
	<content type="html"><![CDATA[<h3>Abstract</h3>
<p>AI systems access data. That access is governed. Each source is reviewed, scoped, and authorized. The governance frameworks that manage this process are mature and well understood. But AI systems do not merely access data. They combine it. And the combination produces something that no individual authorization addressed: inference. An AI that reads a calendar, an email account, and a CRM can derive a health condition that exists in none of those sources individually. The inference was never permitted. It was never prevented. It sits in a gap that current frameworks do not recognize. This essay introduces Context Stewardship, an interpretive lens within the AI Life Cycle Core Principles framework, to name that gap and propose controls for it. Context Stewardship reframes six existing governance principles around a single question: what risks emerge when authorized data sources are combined? It introduces Inference Boundary Mapping, a documented specification that defines what an AI system may and may not derive from combined context. The argument is that data access is not the right unit of governance for AI systems that synthesize across sources. Data combination is. Until frameworks reflect that distinction, organizations will continue to govern what AI can reach while ignoring what it can infer.</p>
<h3>Between Access and Inference</h3>
<p>An AI reads your calendar. That is authorized.</p>
<p>It reads your email. No problem. That is authorized too.</p>
<p>It queries your company&rsquo;s CRM. Authorized, for certain fields.</p>
<p>Each access was scoped. Each was reviewed. Each was approved under data control frameworks designed to ensure that AI systems reach only the data they are permitted to reach.</p>
<p>But no one authorized what happens next. The AI combines what it found. A medical appointment on the calendar. Test results mentioned in an email thread. An insurance tier in the CRM. From these, it derives a health condition. No single source contained that information. No single authorization contemplated it. The inference was never permitted. It was simply never prevented.</p>
<p>This is the gap that current data governance frameworks do not see because they govern access. They do not govern synthesis. They ask whether the AI may reach a data source but not what becomes possible when authorized sources are combined.</p>
<p>This gap, between source-level authorization and synthesis-level inference, represents the most under-addressed data governance risk in enterprise AI deployment today. Context Stewardship, which I explain in more detail below, is designed to address it.</p>
<p>Context Stewardship is part of the AI Life Cycle Core Principles (AILCCP) framework. The AILCCP is a system of 37 principles organized across 10 strategic pillars, designed to guide organizations through the full life cycle of AI development, deployment, and decommissioning. Now in its third year of development, the AILCCP spans areas from oversight and accountability to reliability and robustness. (A public-facing version is available <a href="https://law.stanford.edu/2023/03/17/ai-life-cycle-core-principles/" rel="noopener noreferrer" target="_blank">here</a>.) It functions as both a compliance roadmap and a strategic tool for managing the legal, ethical, and reputational risks that accompany AI adoption. Context Stewardship is one of the latest additions to the framework. It is not a standalone principle, but an interpretive lens that cuts across six AILCCP principles (which appear with initial capital letters): Governance, Data Stewardship, Privacy, Consent, Security, and Resilience. It reframes each of these around a single question: what risks emerge from combining authorized data sources that none of those sources would present alone?</p>
<h3>The Combinatorial Problem</h3>
<p>The authorization pattern I described above is how most organizations operate. Each approval in that sequence was evaluated independently but no one asked what the combination would make possible.</p>
<p>This reflects a structural limitation in existing frameworks. They were designed for a world in which systems accessed data sources one at a time. They lack vocabulary for what happens when an AI agent moves across sources, synthesizing context as it goes. A system with access to three enterprise sources has a manageable set of possible inferences. A system with access to fifteen has a set that grows far faster than the number of sources, because every new source interacts with all the existing ones. No individual review of each source can meaningfully cover that space.</p>
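<p>The growth claim is easy to quantify. If every combination of two or more authorized sources is a potential basis for inference, the count is exponential in the number of sources. A rough sketch:</p>
<pre>
# Combinations of two or more sources out of n: 2**n - n - 1 (drop the empty
# set and the n single-source cases, which source-level review already covers).
def multi_source_combinations(n):
    return 2**n - n - 1

for n in (3, 5, 10, 15):
    print(n, multi_source_combinations(n))
# 3 -> 4, 5 -> 26, 10 -> 1013, 15 -> 32752
</pre>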
<p>The health inference I opened with is simple by design. In practice, the inferences available from combined enterprise data are far more varied and far harder to anticipate. Consider a second example. An AI assistant authorized to access a company&rsquo;s code repository notices that a senior engineer&rsquo;s commit frequency has dropped. Individually, that data point means little. People take vacations. They shift to architecture work. They mentor junior staff. But the same AI also has access to the employee&rsquo;s calendar, where it sees several midday blocks marked &ldquo;personal.&rdquo; And it has access to the internal wiki, where it can see that the engineer recently viewed pages on equity vesting schedules and the company&rsquo;s non-compete policy. No single source signals anything. Combined, the AI infers departure risk. It was never asked to. No authorization contemplated this inference. But nothing prevented it either.</p>
<p>Of course, not every combination of data sources produces problematic inferences. Many combinations are benign or beneficial, and mature organizations do attempt (at least on paper) cross-system risk analysis, tiering AI access by data sensitivity, decision impact, and safety vulnerabilities. Some security teams also treat AI agents as distinct entities with their own access credentials, applying zero-trust principles (the assumption that no entity is trusted by default, and every access request must be verified and scoped) to limit what each agent can reach. But these processes are emerging rather than standard, and even where they exist, they tend to operate at the access level rather than the inference level.</p>
<h3>How Context Stewardship Reframes Existing Principles</h3>
<p>Closing this gap requires reframing the questions those frameworks ask. Context Stewardship does this by cutting across six AILCCP principles and shifting each from source-level compliance to synthesis-level risk.</p>
<p>Three of these reframings carry the most weight.</p>
<p><b>Privacy</b> frameworks typically evaluate exposure at the source level. Context Stewardship treats aggregated context as an expanded area of vulnerability. Each additional source the AI can access opens new categories of inference that were unavailable from any individual source. The health inference from the opening illustrates this at a basic level. The departure risk example complicates it further: there, the privacy risk arises not from obviously sensitive data but from the combination of routine signals that no one would think to restrict individually.</p>
<p><b>Consent</b> mechanisms inform users about what each system collects, but they never make it clear that an AI drawing on multiple sources can derive things from their combination that no individual consent addressed. Consenting to an AI assistant that reads your calendar is one thing. Understanding that the same assistant cross-references your calendar with your email, your documents, and your CRM records is quite another.</p>
<p><b>Security</b> controls enforce least-privilege access to individual resources. Context Stewardship introduces a parallel concept: least-context-necessary. An AI that has access to ten enterprise sources but only needs three for a given task should be constrained to those three. Not because the other seven are sensitive in isolation, but because their addition opens inferential possibilities that may be unnecessary and ungoverned.</p>
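<p>As a minimal sketch of what least-context-necessary scoping could look like operationally (the task and source names here are hypothetical, not a product feature):</p>
<pre>
# Hypothetical task-to-context allowlist: the agent receives only the sources
# a task needs, even though it is credentialed for many more.
AUTHORIZED_SOURCES = {
    "calendar", "email", "crm", "wiki", "code_repo",
    "tickets", "chat", "docs", "drive", "payroll",
}

TASK_CONTEXT = {
    "schedule_meeting": {"calendar", "email"},
    "summarize_account": {"crm", "email", "docs"},
}

def context_for(task):
    # Deny by default: tasks with no declared scope get no context at all.
    scope = TASK_CONTEXT.get(task)
    if scope is None:
        raise PermissionError(f"no context scope declared for task {task!r}")
    return scope.intersection(AUTHORIZED_SOURCES)

print(sorted(context_for("schedule_meeting")))  # ['calendar', 'email']
</pre>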
<p>The remaining three principles are also reframed.</p>
<p><b>Data Stewardship</b> shifts from data quality and provenance to scope authorization, asking (in relevant part) whether access was granted with awareness of how sources would combine.</p>
<p><b>Resilience</b> takes on new failure modes specific to synthesized environments: stale data from one source contradicting current data from another, compromised data from one source propagating through the AI&rsquo;s reasoning, or temporal mismatches creating a distorted picture that the AI treats as coherent.</p>
<p>And <b>Governance</b> at the organizational level must answer who bears responsibility for risks that arise from integration decisions, particularly when no single source owner anticipated them.</p>
<h3>Inference Boundary Mapping</h3>
<p>Among the controls accompanying Context Stewardship, one addresses what I believe is the most practically urgent problem: specifying what an AI system may derive from combined context.</p>
<p>The concept is straightforward. If an organization authorizes an AI to access both calendar data and email data, Inference Boundary Mapping asks what inferences from that combination are within scope and which are not. A scheduling optimization that combines calendar availability with email response patterns may be entirely appropriate. A health status inference derived from medical appointment entries and insurance correspondence is not.</p>
<p>In practice, Inference Boundary Mapping takes the form of a documented specification, developed before deployment and revisited periodically, that defines permissible inferences for each combination of data sources the AI can access. Scope is defined by use case: an AI authorized to assist with scheduling has a different inference boundary than one authorized to assist with workforce planning, even if both access the same underlying sources. The exercise requires a cross-functional team. Technical staff identify what inferences the AI is capable of drawing from combined sources. Legal and compliance staff determine which of those inferences fall within the purpose for which access was granted. The resulting specification becomes a governance artifact, auditable and enforceable, that sits between source-level authorization and system-level behavior.</p>
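<p>A minimal sketch of what such a specification might look like as a machine-checkable artifact follows. The source names, inference categories, and deny-by-default rule are assumptions for illustration, not a published schema:</p>
<pre>
# Hypothetical inference-boundary map: for each governed combination of
# sources, declare which inference categories are in scope for a use case.
BOUNDARIES = {
    frozenset({"calendar", "email"}): {
        "use_case": "scheduling_assistant",
        "permitted": {"availability", "response_latency"},
        "prohibited": {"health_status", "departure_risk"},
    },
}

def inference_allowed(sources, category):
    # Deny by default: an unmapped source combination is ungoverned, so block it.
    rule = BOUNDARIES.get(frozenset(sources))
    if rule is None:
        return False
    return category in rule["permitted"] and category not in rule["prohibited"]

print(inference_allowed({"calendar", "email"}, "availability"))   # True
print(inference_allowed({"calendar", "email"}, "health_status"))  # False
print(inference_allowed({"calendar", "crm"}, "availability"))     # False (unmapped)
</pre>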
<p>But the harder cases are less obvious. Consider an AI with access to email, meeting transcripts, project management tools, and peer review records. It could construct a detailed performance profile by correlating communication patterns, meeting participation, task completion rates, and peer feedback. Each of these sources was authorized for the AI to help with workflow management. The performance profile that emerges from their combination was not. Yet unlike the health inference, the line here is blurry. Correlating task completion with meeting load to suggest workload redistribution seems helpful. Correlating email tone with peer review sentiment to predict a performance rating seems invasive and creepy. The underlying data is the same. The difference is in what the AI is permitted to derive from it, and that distinction exists nowhere in a source-level authorization framework. Inference Boundary Mapping makes that distinction explicit and assigns ownership to it.</p>
<p>None of this is easy. An AI system with access to fifteen data sources can draw inferences that no pre-deployment review will fully anticipate. Defining boundaries in advance requires judgment about what an AI might derive from data combinations, and that judgment will sometimes be wrong or incomplete. But the alternative, allowing AI systems to derive whatever their combined context permits, transfers risk from a design decision to an unexamined default. Organizations that make that transfer without acknowledging it may find themselves unable to explain, after the fact, why a particular inference was generated and who authorized it. That is a liability exposure, not merely a compliance gap.</p>
<h3>Beyond Enterprise Data: A Structural Pattern</h3>
<p>Everything I have described so far concerns enterprise data. But the same structural pattern appears wherever AI systems draw on multiple data sources to build an integrated picture of their environment.</p>
<p>AI research is also taking aim at systems that learn internal representations of how the world works, built from training data drawn from multiple sources. These learned representations drive predictions, and those predictions drive actions. The training data that shapes these representations raises the same questions that Context Stewardship asks of enterprise AI. Were the sources selected with awareness of how they would interact? Could the system draw inferences from combined training data that no one intended? When the system produces a flawed prediction, can responsibility be traced back to specific sources?</p>
<p>The pattern is consistent. Whenever multiple sources are synthesized, the whole exceeds what any individual source authorization contemplated. That is true whether the synthesis happens in an enterprise assistant combining calendar and email data or in a research system combining training datasets to build a model of its environment.</p>
<h3>The Regulatory Gap</h3>
<p>Current and proposed AI legislation does not adequately address combinatorial inference risk. Transparency mandates focus on disclosing what data a system accesses and how it processes that data. Impact assessments, which are increasingly required by state consumer privacy law, evaluate risks at the system level or the data source level. Neither is designed to capture risks that emerge specifically from synthesis across individually authorized sources.</p>
<p>This matters increasingly as AI systems become more deeply embedded in enterprise operations. The number of data sources they access will grow. The inferential possibilities will expand. Legislative frameworks that treat data access authorization as the primary safeguard will increasingly miss where the actual risk resides.</p>
<p>I do not think this requires entirely new legislation. In many cases, existing regulatory frameworks could accommodate combinatorial risk analysis if regulators recognized the category. What is missing is analytical vocabulary. Context Stewardship, and controls like Inference Boundary Mapping, are an attempt to provide it.</p>
<h3>What This Means in Practice</h3>
<p>For legal counsel advising on AI integration, the practical implication is this: reviewing AI data access authorizations one source at a time is necessary but insufficient. The review should also ask what becomes possible when authorized sources are combined, and whether the organization has mechanisms to govern those possibilities.</p>
<p>For policy makers, the implication is that transparency and access control mandates, while valuable, do not reach the synthesis layer. The combinatorial risks I described are present in every enterprise AI deployment that accesses more than one data source. They are not speculative.</p>
<p>For organizations deploying AI agents in enterprise environments, the question is concrete: when your AI system derives something from combined data that no individual authorization addressed, who is accountable? If the answer is unclear, the organization has a gap that no amount of source-level compliance will close.</p>
<p>Context Stewardship does not claim to solve every problem that enterprise AI creates. It identifies the specific layer where source-by-source authorization fails and proposes both an analytical framework and practical controls to address it. The AILCCP was designed to evolve as AI deployment reveals new categories of risk. Context Stewardship represents that evolution.</p>]]></content>
	<updated>2026-02-11T15:45:46+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-02-11T15:45:46+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="ai governance"/>

	<category term="eran kahana"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-08:/279363</id>
	<link href="https://law.stanford.edu/2026/02/08/built-by-agents-tested-by-agents-trusted-by-whom/" rel="alternate" type="text/html"/>
	<title type="html">Built by Agents, Tested by Agents, Trusted by Whom?</title>
	<summary type="html"><![CDATA[<p>On February 6, 2026, StrongDM&rsquo;s AI team published a manifesto. Three engineers described a &ldquo;Software...</p>]]></summary>
	<content type="html"><![CDATA[<p>On February 6, 2026, StrongDM&rsquo;s AI team published a manifesto. Three engineers described a &ldquo;Software Factory&rdquo; where coding agents write, test, and ship production software. No human writes code. No human reviews code. The humans design specifications, curate test scenarios, and watch the scores. The agents do everything else.</p>
<p>This is not a research prototype. <a href="https://www.strongdm.com/" rel="noopener noreferrer" target="_blank">StrongDM</a> builds access management and security software. Pause on that. A team building <i>security infrastructure</i> has decided that human code review is an obstacle, not a safeguard. They are not alone. Dan Shapiro&rsquo;s <a href="https://www.danshapiro.com/blog/2026/01/the-five-levels-from-spicy-autocomplete-to-the-software-factory/" rel="noopener noreferrer" target="_blank">five-level taxonomy of AI-assisted programming</a>, published weeks earlier, places this approach at &ldquo;Level 5: The Dark Factory.&rdquo; The term borrows from manufacturing, where robots work in unlit facilities because robots do not need to see.</p>
<p>I think this development is more consequential than it appears. It is not merely a story about productivity. It inverts how we assign responsibility for software behavior. Existing regulatory frameworks are not prepared for it.</p>
<p><b>The Inversion</b></p>
<p>StrongDM&rsquo;s charter contains two rules: &ldquo;Code must not be written by humans&rdquo; and &ldquo;Code must not be reviewed by humans.&rdquo; Their CTO, Justin McCarthy, offers a benchmark: &ldquo;If you haven&rsquo;t spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement.&rdquo;</p>
<p>Why does this work at all? Consider the trajectory. In 2024, models like Claude 3.5 Sonnet and its successors substantially improved at coding tasks, especially when used in agentic workflows over long contexts. By late 2025, newer systems from Anthropic, OpenAI, and others had made it routine for many engineers to rely on AI to draft and refactor large portions of production code, with human effort shifting toward architecture, safety, and integration review; AI-written code had become reliable enough that the question shifted from &ldquo;can agents write code?&rdquo; to &ldquo;why are humans still writing code?&rdquo;</p>
<p>This is a textbook example of what Ray Kurzweil calls the Law of Accelerating Returns (the observation that technological progress follows exponential curves, but humans consistently misjudge the pace because we instinctively extrapolate in linear fashion). The exponential curve here is not raw compute. It is model reliability on complex, multi-step tasks. Each generation of model compounds the gains of the last. The shift from human verification to machine-driven validation happened faster than almost anyone predicted, and it will keep accelerating.</p>
<p>But speed raises an alignment question that Stuart Russell has spent decades studying (his work on AI alignment focuses on the gap between what we tell machines to optimize and what we actually want). What are these agents trying to do? The answer is: pass the tests. Not &ldquo;build good software.&rdquo; Not &ldquo;serve the user.&rdquo; Pass the tests. That&rsquo;s it. StrongDM learned this the hard way. Their agents <a href="https://factory.strongdm.ai/" rel="noopener noreferrer" target="_blank">wrote return true</a>, which passes any test beautifully and does nothing useful.</p>
<p><b>How Do You Know It Works?</b></p>
<p>Instead of checking whether code passes or fails a fixed set of tests, StrongDM wrote detailed descriptions of how a real customer would actually use the software, step by step. They kept these descriptions hidden from the agents, so the agents could not simply memorize the answers. Then they asked a different question than traditional testing asks. Instead of &ldquo;does it pass?&rdquo;, they asked &ldquo;if a real person used this software in all the ways a real person might, how often would it actually do what they needed?&rdquo;</p>
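<p>StrongDM&rsquo;s published description is prose, but the shape of the idea can be sketched in a few lines of code. Everything below (the function names, the scenario format, the toy stub) is a hypothetical illustration, not their system:</p>
<pre>
# Toy sketch of scenario-based satisfaction scoring. Scenarios stay hidden from
# the coding agents; the metric is "how often did the software do what the user
# needed?" rather than a binary pass/fail over a fixed assertion suite.
def met_user_need(software, scenario):
    return software(scenario["request"]) == scenario["expected_outcome"]

def satisfaction_rate(software, scenarios, trials=3):
    # Repeat trials because agentic systems can behave nondeterministically.
    results = [met_user_need(software, s) for s in scenarios for _ in range(trials)]
    return sum(results) / len(results)

# A "return true"-style stub aces a weak assertion suite, but scores zero here
# because it never produces the outcome a real user actually needed.
scenarios = [{"request": "grant alice access to db1",
              "expected_outcome": "alice:db1:granted"}]
print(satisfaction_rate(lambda request: True, scenarios))  # 0.0
</pre>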
<p>The <a href="https://law.stanford.edu/2023/03/17/ai-life-cycle-core-principles/" rel="noopener noreferrer" target="_blank">AI Life Cycle Core Principles</a> (AILCCP) framework&rsquo;s Metrics principle warns against exactly this kind of substitution unless done carefully. (Note: AILCCP principles appear in initial uppercase.) The economist Charles Goodhart observed in 1975 that when a measure becomes a target, it ceases to be a good measure. Tell an agent to maximize a test score and it will maximize the test score, whether or not the underlying software actually works. StrongDM&rsquo;s satisfaction metric is clever, but it uses AI-as-judge. This creates a circularity: the same class of technology that writes the code also decides whether the code works. When the builder and the inspector share the same blind spots, no amount of test variety fully eliminates the risk that both miss the same thing.</p>
<p>The Accuracy principle sharpens this. It requires that AI system performance match what developers and vendors claim, and that the system employ ongoing testing and self-correction. StrongDM does test continuously. But the tests are run by systems with the same limitations as the systems being tested. When a human writes a test, the human brings different assumptions, different mistakes, and different oversights than the person who wrote the code. That mismatch is what makes testing useful. When the same AI model writes the code and evaluates it, that mismatch shrinks.</p>
<p>This is Russell&rsquo;s alignment problem. The agents are not trying to satisfy users. They are trying to score well on a test that is supposed to represent user satisfaction. Those are different things. A clever enough agent will find ways to ace the test without actually doing what users need. The &ldquo;return true&rdquo; episode was a crude version. Subtler versions will be harder to catch.</p>
<p><b>The Digital Twin Universe and the Economics of Impossible Things</b></p>
<p>The most creative element of StrongDM&rsquo;s approach is what they call the Digital Twin Universe. They built working replicas of Okta, Jira, Slack, Google Docs, Google Drive, and Google Sheets, mimicking their interfaces, edge cases, and behaviors. Against these replicas, they run thousands of test scenarios per hour. No rate limits. No API costs. No risk of breaking real services.</p>
<p>McCarthy frames this as an economic inversion, and the evidence supports him. Building a faithful replica of a major SaaS product was always technically possible. It was never worth the cost. Engineers did not even propose it because they already knew the answer. Then the cost of writing software collapsed. What was unthinkable six months ago is now routine.</p>
<p>This is Kurzweil&rsquo;s exponential logic again, applied to economics rather than capability. When a technology crosses a cost threshold, investments that were irrational yesterday become obvious today, and the unlocked capabilities cascade. The Digital Twin Universe is not just a testing technique. It is proof that the economics of software have changed in kind, not merely in degree. If you can clone Okta&rsquo;s API in hours rather than months, the limit on software quality is no longer cost. It is imagination.</p>
<p>But the AILCCP framework&rsquo;s Accountability principle asks a harder question. When software is &ldquo;grown&rdquo; rather than written, when replicas stand in for real services, and when quality is measured by probability rather than certainty, who is responsible for what comes out? The principle requires (among other things) that output be &ldquo;traceable to an appropriate responsible party&rdquo; and that there be &ldquo;zero gap between AI system behavior and deployer&rsquo;s liability.&rdquo;</p>
<p>StrongDM&rsquo;s architecture makes tracing difficult by design. No human reviewed the code that produced a given output. No human wrote the test that validated it. No human built the replica against which it was tested. The humans designed the system that designed the system. Existing legal frameworks assume someone, somewhere, looked at the work. Here, nobody did.</p>
<p><b>What Happens to the Engineers?</b></p>
<p>StrongDM&rsquo;s team is three engineers who started in July 2025. By October, when <a href="https://simonwillison.net/2026/Feb/7/software-factory/" rel="noopener noreferrer" target="_blank">Simon Willison visited</a>, they already had working demos of the system that manages their coding agents, their Digital Twin Universe, and their satisfaction testing framework. Three people, three months.</p>
<p>That speed raises a question the AILCCP&rsquo;s Workforce Compatible principle is designed to surface: does this technology augment human expertise, or does it replace it? StrongDM&rsquo;s model does not augment software engineering as traditionally understood. It replaces it with something else. The humans in a Software Factory write specifications, design scenarios, and architect systems. They do not program. The skill of reading and writing code, the bedrock of software engineering for seventy years, becomes unnecessary. This is Shapiro&rsquo;s Level 5, the &ldquo;Dark Factory,&rdquo; where the human role shifts entirely from building software to designing and monitoring the systems that build software. The lights are off because nobody needs to see.</p>
<p>The same principle asks a follow-on question: as the old skills fade, does meaningful oversight survive? StrongDM says oversight moves from reviewing code to designing scenarios and monitoring satisfaction. That may prove sufficient. It is also the kind of arrangement where confidence builds gradually, scrutiny fades, and the skills needed to catch a serious failure quietly disappear.</p>
<p><b>Regulatory Implications</b></p>
<p>So what happens when something goes wrong? Regulation in software has always been reactive. It responds to harm after the fact. The AILCCP framework cannot change that, but it can identify where the gaps are before they produce failures. Three stand out: nobody knows who is liable, nobody knows what to disclose, and the contracts have not caught up.</p>
<p>Accountability in software has historically (and to this day) worked through product liability, professional licensing, and contractual warranties. None of these contemplate software that no human has reviewed. The FTC&rsquo;s enforcement actions have focused on deceptive marketing and consumer protection. But a Software Factory producing security infrastructure raises different questions entirely. If an access management system fails because an agent-written module contained a subtle error that no human ever saw, who is liable? The three engineers who designed the architecture? The AI provider whose model generated the code? The company that sold the product?</p>
<p>The liability question is hard enough. The disclosure question may be worse. When a customer asks &ldquo;how was this software built?&rdquo; the truthful answer is: &ldquo;Coding agents wrote it. Other agents tested it against replicas of your services. Satisfaction scores exceeded our threshold.&rdquo; Most procurement officers, auditors, and regulators have no way to evaluate that answer. But the problem runs deeper than unfamiliarity. Even if they understood, they would have no framework for deciding whether the answer is acceptable. No industry standard defines what a sufficient satisfaction score looks like. No audit methodology covers agent-built software tested against replicas. No procurement checklist asks whether the vendor&rsquo;s coding agents share blind spots with the vendor&rsquo;s testing agents. The disclosure is technically accurate and practically useless, not because the listener is unsophisticated, but because the tools for making sense of it do not exist yet.</p>
<p>And here is the quiet (or loud) absurdity that deserves attention. Open the terms of service for any AI-built product shipping today. Read the galactic warranty disclaimers, the limitation-of-liability clauses, the &ldquo;AS IS&rdquo; language. You will find them virtually identical to the terms that have accompanied software for decades. The same boilerplate that disclaimed liability when dozens of engineers wrote and reviewed every line now disclaims liability when no human has looked at the code at all. The contractual wrapper has not changed while the thing inside the wrapper has. A limitation-of-liability clause drafted for software built by humans, tested by humans, and reviewed by humans is now quietly absorbing the risk of software that was none of those things. Nobody updated the contract because the contract was never designed to describe how the software was made. It was designed to limit, or more accurately extinguish, liability for what happens when the software breaks. And so the same language that once disclaimed imperfection in a human process now disclaims the absence of a human process entirely.</p>
<p>That gap between what the product is and what the contract says creates a credibility problem the AILCCP&rsquo;s Trustworthy principle identifies directly: blanket disclaimers that contradict a vendor&rsquo;s own trust claims destroy the trust they are trying to build. Try telling an enterprise customer that your software was never reviewed by a human. Then hand them the same limitation-of-liability clause their vendor used in 1996.</p>
<p>But perhaps this is transitional. The Software Factory represents such a thorough departure from conventional development that it might eventually produce an entirely new contractual form. A vendor confident enough to eliminate human code review might also be confident enough to offer terms that reflect what the product actually is: a warranty tied not to human inspection but to satisfaction scores, scenario coverage, or Digital Twin fidelity, with disclosures covering the agent architecture, the testing methodology, and the threshold at which the vendor considers the software fit for use. Nobody has done this yet, and the reasons are structural. Insurance underwriters price risk based on categories they understand, and &ldquo;software produced without human review, tested by AI against simulated services&rdquo; does not appear in any underwriting model. Investors would read novel warranty terms as voluntary assumption of liability. The legacy boilerplate persists because it limits exposure, satisfies insurers, and avoids alarming the board, not because it accurately describes the product.</p>
<p>The liability gap, the disclosure gap, and the contractual gap all point to the same underlying problem. Stuart Russell&rsquo;s AI alignment asks a deceptively simple question: when we build systems that optimize for the objectives we give them, have we preserved the ability to step in and correct course when those objectives turn out to be wrong? For the Software Factory, the answer is not yet and probably never. No regulatory framework addresses this mode of production at all. And the exponential adoption curve means the window for getting ahead of it is narrow. If StrongDM&rsquo;s approach spreads at the rate current trends suggest, Software Factories could be producing a significant share of commercial software within two years.</p>
<p>The Software Factory&rsquo;s greatest risk is not that agent-written code will be worse than human-written code. It may very well be better. The risk is that when it fails, nobody will know why. Nobody will know how to fix it. And the institutional knowledge required to understand the failure will have atrophied, because the humans stopped reading code years ago.</p>]]></content>
	<updated>2026-02-08T19:09:52+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-02-08T19:09:52+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="ai governance"/>

	<category term="ai risk"/>

	<category term="artificial intelligence"/>

	<category term="eran kahana"/>

	<category term="strongdm"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-06:/279196</id>
	<link href="https://www.gautrais.com/conferences/technopolice-la-surveillance-policiere-a-lere-de-lintelligence-artificielle/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=technopolice-la-surveillance-policiere-a-lere-de-lintelligence-artificielle" rel="alternate" type="text/html"/>
	<title type="html">Félix Treguer / Benoit Dupont , Technopolice&amp;#160;: La surveillance policière à l&amp;#8217;ère de l&amp;#8217;intelligence artificielle, En ligne(6 février 2026)</title>
	<summary type="html"><![CDATA[<p>Vendredi 06 f&eacute;vrier 2026, 12h &ndash; 13h30 (Heure de Montr&eacute;al)&nbsp;
Cette s&eacute;rie de conf&eacute;rences portera ...</p>]]></summary>
	<content type="html"><![CDATA[<h4><strong>Vendredi 06 f&eacute;vrier 2026, 12h &ndash; 13h30 (Heure de Montr&eacute;al)&nbsp;</strong></h4>
<p>Cette s&eacute;rie de conf&eacute;rences portera sur des ouvrages r&eacute;cents et incontournables qui sont align&eacute;s avec les th&eacute;matiques de l&rsquo;axe Droit, cyberjustice et cybers&eacute;curit&eacute;. Ainsi, pour cette deuxi&egrave;me &eacute;dition du cercle de lecture de cette programmation, F&eacute;lix Tr&eacute;guer pr&eacute;sentera son ouvrage Technopolice: La surveillance polici&egrave;re &agrave; l&rsquo;&egrave;re de l&rsquo;intelligence artificielle&nbsp;(&Eacute;ditions Divergences) dans un premier temps puis, Beno&icirc;t Dupont y r&eacute;agira dans un second temps en tant que r&eacute;pondant.</p>
<h4><strong>&Agrave; propos de l&rsquo;ouvrage</strong></h4>
<p>Drones, logiciels pr&eacute;dictifs, vid&eacute;osurveillance algorithmique, reconnaissance faciale: le recours aux derni&egrave;res technologies de contr&ocirc;le se banalise au sein de la police. Loin de juguler la criminalit&eacute;, toutes ces innovations contribuent en r&eacute;alit&eacute; &agrave; amplifier la violence d&rsquo;&Eacute;tat. Elles referment nos imaginaires politiques et placent la ville sous contr&ocirc;le s&eacute;curitaire. C&rsquo;est ce que montre ce livre &agrave; partir d&rsquo;exp&eacute;riences et de savoirs forg&eacute;s au cours des luttes r&eacute;centes contre la surveillance polici&egrave;re. De l&rsquo;industrie de la s&eacute;curit&eacute; aux arcanes du minist&egrave;re de l&rsquo;int&eacute;rieur, de la CNIL au v&eacute;hicule de l&rsquo;officier en patrouille, il retrace les liens qu&rsquo;entretient l&rsquo;h&eacute;g&eacute;monie techno-solutionniste avec la d&eacute;rive autoritaire en cours.</p>
<h4><strong>&Agrave; propos de l&rsquo;auteur</strong></h4>
<p>F&eacute;lix Tr&eacute;guer est chercheur associ&eacute; au&nbsp;<a href="http://cis.cnrs.fr/" rel="noopener noreferrer" target="_blank">Centre Internet et Soci&eacute;t&eacute; du CNRS</a>&nbsp;et membre depuis 2009 de&nbsp;<a href="https://www.laquadrature.net/" rel="noopener noreferrer" target="_blank">La Quadrature du Net</a>, une association d&eacute;di&eacute;e &agrave; la d&eacute;fense des droits humains dans le contexte d&rsquo;informatisation.</p>
<p>Ses recherches combinent l&rsquo;histoire et la th&eacute;orie politiques, le droit ainsi que les &eacute;tudes des m&eacute;dias et technologies pour se pencher sur l&rsquo;histoire politique d&rsquo;Internet et de l&rsquo;informatique, les pratiques de pouvoir comme la surveillance et la censure, la gouvernementalit&eacute; algorithmique de la sph&egrave;re publique et, plus largement, la transformation num&eacute;rique de l&rsquo;&Eacute;tat et du domaine de la s&eacute;curit&eacute;.</p>
<p>Il a notamment travaill&eacute; au Berkman Klein Center for Internet &amp; Society de l&rsquo;universit&eacute; d&rsquo;Harvard, au Centre de recherches internationales de Sciences Po, &agrave; l&rsquo;Institut des Sciences de la Communication du CNRS. Fin 2021, il a &eacute;t&eacute; chercheur invit&eacute; au&nbsp;<a href="https://archive.ph/eEeEt" rel="noopener noreferrer" target="_blank">WZB Berlin Social Science Center.</a>&nbsp;et &agrave; l&rsquo;&eacute;t&eacute; 2024, &agrave; l&rsquo;<a href="https://itsrio.org/en/comunicados/result-for-the-its-global-policy-fellowship-program-2024/" rel="noopener noreferrer" target="_blank">Institut Technologie et Soci&eacute;t&eacute;</a>&nbsp;de Rio de Janeiro.</p>
<h4><strong>&Agrave; propos du r&eacute;pondant</strong></h4>
<p>Beno&icirc;t Dupont est, depuis 2016, titulaire de la&nbsp;Chaire de recherche du Canada en Cyber-r&eacute;silience. De 2006 &agrave; 2016, il fut titulaire de la&nbsp;Chaire de recherche du Canada en s&eacute;curit&eacute; et technologie. Il est professeur titulaire &agrave; l&rsquo;&Eacute;cole de criminologie de l&rsquo;Universit&eacute; de Montr&eacute;al et Directeur scientifique du&nbsp;R&eacute;seau int&eacute;gr&eacute; sur la cybers&eacute;curit&eacute; (<a href="https://www.serene-risc.ca/fr" rel="noopener noreferrer" target="_blank">SERENE-RISC</a>), qu&rsquo;il a fond&eacute; en 2014. Il si&egrave;ge &eacute;galement comme observateur&nbsp;repr&eacute;sentant le monde de la recherche sur le conseil d&rsquo;administration du&nbsp;Canadian Cyber Threat Exchange (<a href="https://cctx.ca/" rel="noopener noreferrer" target="_blank">CCTX</a>).</p>]]></content>
	<updated>2026-02-06T16:40:58+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-02-06T16:40:58+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-02-01:/278495</id>
	<link href="https://law.stanford.edu/2026/01/31/from-logging-to-hitl-locating-agent-controls-in-the-ai-life-cycle-core-principles-framework/" rel="alternate" type="text/html"/>
	<title type="html">From Logging to Transparency: Locating AI Agent Controls in the AI Life Cycle Core Principles Framework</title>
	<summary type="html"><![CDATA[<p>The AI Life Cycles Core Principles (AILCCP) framework* operates through a layered architecture. Thir...</p>]]></summary>
	<content type="html"><![CDATA[<p>The AI Life Cycles Core Principles (AILCCP) framework* operates through a layered architecture. Thirty-seven principles articulate what responsible AI systems must achieve. Controls specify how organizations implement those principles in practice. An AILCCP principle such as Safety declares (among other things) that AI systems must prevent harm across the application lifecycle. The controls beneath it, including Agent Kill Switch, Sandboxing, and Rate and Scope Limiter, provide the operational mechanisms through which Safety becomes enforceable.</p>
<p>Controls frequently serve multiple AILCCP principles. Sandboxing, for instance, implements both Safety and Security, because isolating an agent&rsquo;s execution environment simultaneously prevents harmful actions and resists adversarial exploitation. This cross-mapping reflects the structural reality that principles are analytically distinct but operationally entangled.</p>
<p>The table below demonstrates that common proposals for agent oversight, including logging, kill switches, sandboxing, rate limits, human-in-the-loop gates, and transparency requirements, already exist as named controls within the AILCCP framework. Each maps to one or more AILCCP principles that supply the normative justification for its deployment. For these mechanisms, the task is selection among existing controls based on the AILCCP principles most intensely activated by a given agent deployment. Where novel capabilities outpace current controls, the AILCCP framework accommodates additions through its versioning protocol.</p>
<table cellspacing="0" cellpadding="0">
<tbody>
<tr>
<td valign="top"><b>Mechanism</b></td>
<td valign="top"><b>AILCCP Control(s)</b></td>
<td valign="top"><b>Primary Principle(s)</b></td>
</tr>
<tr>
<td valign="top"><b>Logging</b><b></b></td>
<td valign="top">Real-time monitoring, Monitoring &amp; KPIs, Context-to-Output Lineage</td>
<td valign="top">Accountability, Safety, Security</td>
</tr>
<tr>
<td valign="top"><b>Kill switches</b><b></b></td>
<td valign="top">Agent Kill Switch</td>
<td valign="top">Human-Centered, Safety</td>
</tr>
<tr>
<td valign="top"><b>Sandboxing</b><b></b></td>
<td valign="top">Sandboxing, Agent tool allowlists and sandbox</td>
<td valign="top">Safety, Security</td>
</tr>
<tr>
<td valign="top"><b>Rate limits</b><b></b></td>
<td valign="top">Rate and Scope Limiter</td>
<td valign="top">Human-Centered, Safety, Robust</td>
</tr>
<tr>
<td valign="top"><b>Human-in-the-loop</b><b></b></td>
<td valign="top">Human Approval Gate for Sensitive Actions, HITL enforcement, Dual-Control for High-Risk Categories</td>
<td valign="top">Fundamental Rights, Human-Centered, Safety</td>
</tr>
<tr>
<td valign="top"><b>Transparency requirements</b><b></b></td>
<td valign="top">AI Fact Label, Provenance/CAI-C2PA pipeline, Evidence &amp; Disclosure Ledger</td>
<td valign="top">Transparency, Accountability</td>
</tr>
</tbody>
</table>
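<p>The table is, in effect, a many-to-many mapping from oversight mechanisms to named controls to principles. The Python sketch below encodes two of its rows and a selection helper reflecting the point that the task is choosing among existing controls based on which principles a deployment activates. The data structure is illustrative and is not itself part of the AILCCP framework; the ampersand in &ldquo;Monitoring &amp; KPIs&rdquo; is spelled out to keep the sketch plain.</p>
<pre>
# A minimal sketch of the mechanism-to-control-to-principle mapping,
# using two rows from the table above.
CONTROL_MAP = {
    "logging": {
        "controls": ["Real-time monitoring", "Monitoring and KPIs",
                     "Context-to-Output Lineage"],
        "principles": {"Accountability", "Safety", "Security"},
    },
    "kill_switch": {
        "controls": ["Agent Kill Switch"],
        "principles": {"Human-Centered", "Safety"},
    },
}

def select_mechanisms(activated_principles):
    """Return mechanisms whose principles intersect the set most intensely
    activated by a given agent deployment."""
    activated = set(activated_principles)
    return sorted(
        mechanism
        for mechanism, entry in CONTROL_MAP.items()
        if entry["principles"].intersection(activated)
    )

# Example: a deployment that primarily activates Safety surfaces both
# mechanisms, mirroring the cross-mapping described above.
assert select_mechanisms({"Safety"}) == ["kill_switch", "logging"]
</pre>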
<p>* Here is the publicly-accessible version of the <a href="https://law.stanford.edu/2023/03/17/ai-life-cycle-core-principles/" rel="noopener noreferrer" target="_blank">AILCCP</a>.</p>]]></content>
	<updated>2026-01-31T23:48:17+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-01-31T23:48:17+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="ai controls"/>

	<category term="ai governance"/>

	<category term="ai safety"/>

	<category term="ai security"/>

	<category term="artificial intelligence"/>

	<category term="eran kahana"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-01-29:/278147</id>
	<link href="https://www.gautrais.com/conferences/the-coming-ai-hackers-bruce-schneier/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=the-coming-ai-hackers-bruce-schneier" rel="alternate" type="text/html"/>
	<title type="html">The Coming AI Hackers &amp;#8211; Bruce Schneier, Salle A-3502.1, campus MIL, Université de Montréal(29 janvier 2026)</title>
	<summary type="html"><![CDATA[<p>La Chaire L.R. Wilson est partenaire de cette conf&eacute;rence organis&eacute;e par le Centre international de cr...</p>]]></summary>
	<content type="html"><![CDATA[<p>La Chaire L.R. Wilson est partenaire de cette conf&eacute;rence organis&eacute;e par le Centre international de criminologie compar&eacute;e (CICC), qui accueillera Bruce Schneier, figure de renomm&eacute;e internationale en cybers&eacute;curit&eacute; et en gouvernance des technologies.</p>
<p>Dans cette conf&eacute;rence intitul&eacute;e&nbsp;<em>The Coming AI Hackers</em>, Bruce Schneier s&rsquo;int&eacute;resse &agrave; l&rsquo;&eacute;mergence des &laquo;&nbsp;hackers de l&rsquo;IA&nbsp;&raquo; &mdash; une nouvelle g&eacute;n&eacute;ration d&rsquo;attaques et de vuln&eacute;rabilit&eacute;s rendues possibles par l&rsquo;intelligence artificielle. Au-del&agrave; du piratage informatique classique, il &eacute;largit la notion de hacking &agrave; des syst&egrave;mes complexes tels que les cadres juridiques, les march&eacute;s financiers ou encore les institutions politiques, et interroge la capacit&eacute; de nos soci&eacute;t&eacute;s &agrave; anticiper et encadrer ces risques.</p>
<p>La conf&eacute;rence propose ainsi une r&eacute;flexion essentielle sur les enjeux de s&eacute;curit&eacute;, de gouvernance et de politiques publiques li&eacute;s au d&eacute;ploiement acc&eacute;l&eacute;r&eacute; de l&rsquo;IA, &agrave; un moment o&ugrave; ces technologies transforment en profondeur les &eacute;quilibres sociaux, &eacute;conomiques et institutionnels.</p>]]></content>
	<updated>2026-01-29T22:19:00+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-01-29T22:19:00+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-01-29:/278138</id>
	<link href="https://law.stanford.edu/2026/01/29/a-paralysis-prescription/" rel="alternate" type="text/html"/>
	<title type="html">A Paralysis Prescription</title>
	<summary type="html"><![CDATA[<p>A Paralysis Prescription
I review pretty much everything related to AI governance. It&rsquo;s central to m...</p>]]></summary>
	<content type="html"><![CDATA[<p><b>A Paralysis Prescription</b></p>
<p>I review pretty much everything related to AI governance. It&rsquo;s central to my work at Stanford. It&rsquo;s also central to what I call the &ldquo;<a href="https://law.stanford.edu/2023/03/17/ai-life-cycle-core-principles/" rel="noopener noreferrer" target="_blank">AI Life Cycle Core Principles</a>&rdquo; framework, which is now almost three years in the making. But governance, which, at a high level, stands for the sum of the organization&rsquo;s policies, procedures, processes, and practices relative to AI, didn&rsquo;t start as a central principle. It was just one of the 37 principles that coalesced over time into a picture of what it takes to deal with AI properly and efficiently. But Governance (AILCCP principles appear in uppercase) rather quickly claimed its prominence. The more work I did on the framework, the more I saw that without proper alignment with the Governance principle, the organization&rsquo;s prospects of successful AI implementation are bleak. Now, it&rsquo;s the single most important principle.</p>
<p>And this brings me to the present issue. Without naming names (I have no interest in stirring discomfort), I want to shed light on a common practice among consultants when it comes to talking about AI governance. (And yes, I am using the term in lowercase on purpose.)</p>
<p>I recently came across a white paper published by what I&rsquo;ll call the &ldquo;Acme Group.&rdquo; The paper is actually quite similar to many others you will find out there. So, my comments here should be helpful beyond just this example. And what you will see fairly quickly is that the advice, as well-intentioned as it may be, is a prescription for paralysis.</p>
<p>Acme Group seeks to advance its AI consulting expertise. It prescribes at least seven distinct committee or oversight structures:</p>
<ol>
<li>Generative AI Governance Committee (cross-functional)</li>
<li>Ethics Committee or Advisory Board</li>
<li>Technology Assessment processes</li>
<li>Post-Implementation Review bodies</li>
<li>Reputation Response Team</li>
<li>Innovation Labs Oversight</li>
<li>Various &ldquo;cross-functional teams&rdquo; for specific controls</li>
</ol>
<p>Each of these bodies requires membership, meeting cadences, documentation, and presumably the authority to delay or block AI implementation initiatives. The cumulative effect is decision diffusion: accountability is dispersed across so many nodes that no single entity possesses the authority or incentive to say &ldquo;yes.&rdquo;</p>
<p><b>Three Dimensions of the Critique</b></p>
<p><b>1. Velocity Degradation</b></p>
<p>Every committee represents a synchronization point. If a GenAI initiative must secure approval from governance, ethics, technology assessment, and post-implementation review bodies (each meeting monthly or quarterly), the calendar arithmetic alone suggests multi-month latency for even modest deployments.</p>
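<p>To make that calendar arithmetic concrete, here is a small Python sketch under assumed cadences; the committee names and meeting frequencies are illustrative, not taken from the white paper.</p>
<pre>
# Serial approval through committees that meet on fixed cadences.
# Assumed: two monthly bodies and two quarterly bodies.
CADENCE_DAYS = {
    "GenAI Governance Committee": 30,
    "Technology Assessment": 30,
    "Ethics Advisory Board": 90,
    "Post-Implementation Review": 90,
}

def approval_latency(cadences):
    """Expected wait is roughly half a cycle per body when approvals run
    sequentially; worst case (just missing each meeting) is a full cycle."""
    expected = sum(days / 2 for days in cadences.values())
    worst = sum(cadences.values())
    return expected, worst

expected, worst = approval_latency(CADENCE_DAYS)
print(f"expected ~{expected:.0f} days, worst case {worst:.0f} days")
# expected ~120 days, worst case 240 days, before any veto restarts a leg.
</pre>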
<p>Acme Group advises securing stakeholder approval before deployment. Sounds reasonable, or at the very least harmless, right? It&rsquo;s not. When multiplied across committees, this requirement becomes a veto chain in which any single objection halts progress. When too many parties hold blocking rights, the resource in question (here, the desired AI capability) is inevitably underutilized.</p>
<p><b>2. Accountability Diffusion</b></p>
<p>The proliferation of oversight bodies paradoxically undermines the very accountability the framework purports to establish. When seven committees review a decision, who bears responsibility for its consequences? The AILCCP principle of Accountability requires, among other things, that &ldquo;output is traceable to appropriate responsible party.&rdquo; Yet the Acme Group structure distributes decision rights so broadly that traceability becomes an exercise in finger-pointing.</p>
<p>When responsibility is shared, individual ownership diminishes. Committee members assume others have conducted rigorous review, resulting in superficial approval from all quarters and genuine scrutiny from none.</p>
<p><b>3. Performative Governance</b></p>
<p>Perhaps the most pointed critique: such committee structures can become governance theater. Organizations form committees to their heart&rsquo;s content, draft policies, and create gigabytes of documentation. Window dressing. The appearance of oversight; zero substance.</p>
<p>This misguided approach generates an organization that is optimized for demonstrating diligence rather than exercising judgment. It&rsquo;s a prescription for paralysis.</p>
<p><b>Aligning with the Governance Principle</b></p>
<p>The Governance principle does not dismiss the importance of forming committees, drafting policies, and creating documentation to promote deliberation. An organization that aligns with this principle will find that it calls for properly calibrating effort to the context of its operations; it&rsquo;s not one-size-fits-all. An organization operating in healthcare or financial services might very well benefit from regimes that intentionally introduce friction. But the emphasis here is that this must be the product of thoughtful intentionality. The decision to create this type of structure renders deliberation a feature, not a bug.</p>]]></content>
	<updated>2026-01-29T21:35:10+00:00</updated>
	<author><name>Eran Kahana</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2026-01-29T21:35:10+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="ai governance"/>

	<category term="artificial intelligence"/>

	<category term="eran kahana"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-01-29:/278067</id>
	<link href="https://law.stanford.edu/2026/01/28/are-neural-organoids-part-of-neurotechnology/" rel="alternate" type="text/html"/>
	<title type="html">Are Neural Organoids Part of “Neurotechnology”?</title>
	<summary type="html"><![CDATA[<p>November 2025 was a big moment for the governance of two very similar things: neurotechnology and ne...</p>]]></summary>
	<content type="html"><![CDATA[<p>November 2025 was a big moment for the governance of two very similar things: neurotechnology and neural organoids. On November 5, the United Nations Educational, Scientific and Cultural Organization (UNESCO) formalized an international instrument providing a Recommendation on the Ethics of Neurotechnology, after member states voted to support it.[1] While legally nonbinding, the UNESCO instrument adds to a growing list of national and international efforts to set new law about &ldquo;neurotechnologies,&rdquo; such as a similar instrument codified by the Organisation for Economic Co-operation and Development (OECD) in 2019.[2]</p>
<p>The next day, on November 6, an opinion paper was published in <em>Science</em> detailing the consensus of experts across the natural and social sciences, law, and bioethics, that &ldquo;neural organoids&rdquo; require governance interventions involving, at minimum, a global monitoring system for the burgeoning field and its applications.[3] The following week, a group of experts and stakeholders met at the Asilomar Conference Grounds in California to discuss such issues around these neural organoids.[4]</p>
<p>Notably, though, the term &ldquo;neurotechnology&rdquo; does not appear anywhere in the article about neural organoids, nor in news coverage of the Asilomar event. Nor does the term &ldquo;organoid&rdquo; appear in either the UNESCO or OECD instruments.</p>
<p>In fact, it is difficult to tell whether these two objects are one and the same, or different, and on what grounds. Neither the nascent law of neurotechnologies nor the scientific discourse on neural organoids provides sufficient clarity on this question. This blog explores the sources of this uncertainty and identifies the issues of legal predictability and policy mismatch that may flow from the absence of clarity.</p>
<h3><strong>Neural Organoids</strong></h3>
<p>A budding scientific field has been working with new &ldquo;organoid&rdquo; techniques to help model and better study neural tissue <em>in vitro</em>&mdash;meaning in the lab, outside of the body. Organoids are &ldquo;organ-like,&rdquo; but are very small and much less sophisticated than the actual organs they are based on, such as the brain. Scientists can use these small clumps of brain cells to understand how parts of the brain work or develop, or even to study how infections or other diseases may affect the brain and what treatments they may respond to.[5] The field has notable promise for scientific and clinical applications, but multiple technical challenges remain before that promise can be fully realized.[6]</p>
<p>However, describing what these nascent technologies are and what they are for is difficult. The field is still struggling to communicate internally and to the public about neural organoids. A notable perspective paper was published in <em>Nature</em> in 2022 by a group of organoid researchers trying to issue terminological norms for their field.[7] The paper also strongly objects to several ways these innovations have been described in journalistic media, such as &ldquo;mini-brain&rdquo; or &ldquo;brain-in-a-dish,&rdquo; but provides no compelling alternative or easier metaphor to use. This intervention illustrates that progress is being made in institutionalizing and standardizing the field of neural organoids&mdash;but also just how recently that work has commenced and how much difficulty there is in communicating to actors outside of the scientific field about what these innovations are.</p>
<h3><strong>What Are Neurotechnologies? Do Neural Organoids Count?</strong></h3>
<p>The term neurotechnology has begun to acquire a legal definition in national and international lawmaking efforts. In the UNESCO Recommendation, for instance, &ldquo;[n]eurotechnology refers currently to devices, systems and procedures&ndash;encompassing both hardware and software&ndash;that directly measure, access, monitor, analyse, predict or modulate the nervous system to understand, influence, restore or anticipate its structure, activity and function.&rdquo;[1] This provides a sweeping scope for the object of regulation, but does not specifically mention organoids.</p>
<p>The OECD&rsquo;s legal instrument has a very similar definition to UNESCO&rsquo;s, raising similar questions about whether neural organoids fit.[2] In the US, at the federal level, legislation was recently introduced in the Senate that would call on the Federal Trade Commission (FTC) to investigate whether and how &ldquo;neurotechnology&rdquo; raises data protection issues. The proposed MIND Act contains a definition of neurotechnology that very closely mirrors the UNESCO and OECD instruments, providing no greater clarity.[8]</p>
<p>While the term &ldquo;neurotechnology&rdquo; appears to have a very broad definition in emerging legal instruments, lawmakers generally use the term to mean something much narrower. The focus of lawmaking bodies and the experts consulting with them is predominantly on physical devices with software components&mdash;primarily, and sometimes exclusively, on &ldquo;brain-computer interfaces&rdquo; (BCIs) or brain stimulation devices.[9]</p>
<p>But neural organoids could still potentially fit into these instruments. The UNESCO definition, by including the clarifying clause &ldquo;encompassing both hardware and software,&rdquo; does appear to suggest that the definition applies to physical and digital devices. But its interpretation would depend on whether the clause is read to be the full extent of what &ldquo;devices, systems, and procedures&rdquo; can mean, or merely two examples of what those larger terms can be. Of note, the UNESCO instrument goes on to provide examples of neurotechnology that do focus on devices, but these examples are preceded by the caveat that &ldquo;[n]eurotechnology includes, but is not limited to:&rdquo;.[1] These elements of the instrument illustrate the lack of complete clarity in its scope.</p>
<p>Within the UNESCO instrument, at least, it is not wholly unreasonable that neural organoids could be interpreted to be &ldquo;systems and procedures . . . that directly . . . analyse, predict, or modulate the nervous system,&rdquo; especially to &ldquo;understand [or] anticipate its structure, activity and function.&rdquo;[1] Whether the UNESCO instrument applies exclusively to physical and digital devices or could apply to biological objects appears unclear at present.</p>
<h3><strong>The Potential for Legal Unpredictability or Mismatches</strong></h3>
<p>Where does this leave the governance of neural organoids? It is currently not clear, legally or scientifically, whether neural organoids are a part of neurotechnology. National and global rules are being set for neurotechnology, but not specifically for organoids.</p>
<p>This prompts the question: Do rules for neurotechnology apply to neural organoids?</p>
<p>When looking at current legal definitions, it looks likely&mdash;but not certain&mdash;that those rules do <em>not</em> apply to organoids. Yet, lawmaking bodies, regulators, or courts appear to have at least some interpretive room that could be used to position neural organoids as neurotechnology, and therefore subject to at least some of these new rules or norm-setting processes on neurotechnology. At the same time, the scientific field of neural organoids is still in the process of organizing itself and has had trouble communicating to external actors about what it does and why.[7] This lack of clarity within the scientific field itself, while understandable given its youth, may invite or enable decision-makers to more readily engulf neural organoids within neurotechnology rules.</p>
<p>This lack of legal and scientific clarity could create predictability issues, since organoid scientists and product developers may not know whether any of the emerging rules on neurotechnology will apply to their activities. It could also raise concerns for funders and investors, or even insurers for organoid-based therapeutics, who may or may not want to see compliance with those rules as a condition of supporting neural organoid work.</p>
<p>The definitional uncertainty also raises questions about whether neurotechnology rules are fit and appropriate for neural organoids, or whether applying those rules could result in policy mismatches. Most of the neurotechnology rules set so far have mostly or only medical and consumer <em>devices</em> in mind&mdash;not <em>biological</em> innovations like organoids.[9] Applying those rules to neural organoids without thoughtfully considering whether and how they should be adapted to biologics could result in a poor match between the goals of those rules and the issues that organoids may pose.</p>
<h3><strong>Clarifying Definitions, For Now</strong></h3>
<p>Since most existing and emerging legal instruments on neurotechnologies appear to only have medical and consumer devices in mind, this blog tentatively recommends that lawmaking and regulatory bodies consider clarifying that these rules do not apply to neural organoids&mdash;at least, for now. At minimum, clarifying that neurotechnology rules do not <em>currently</em> apply to neural organoids would provide short-to-medium-term predictability to scientists and other stakeholders and avoid applying rules that may not be fit-for-purpose.</p>
<p>Since definitions are vague, it is unlikely that this path would require full amendments of these legal instruments. Clarification could instead come in the form of formal guidance issued by governing bodies, or less formally, in the form of public statements from officials about whether the implementation of those rules would cover neural organoids.</p>
<p>It may, however, be prudent to not close the door entirely to these legal instruments applying to neural organoids at some point in the future. While the neural organoid field would benefit from predictability, an international and interdisciplinary group of experts has agreed it is likely that governance for these emerging technologies will be required at some point.[3] Custom rules for neural organoids may be preferable, but having governance instruments for neurotechnologies available in reserve may be valuable if no such tailored rules arise.</p>
<h3><strong>References</strong></h3>
<p>[1] UNESCO, Recommendation on the Ethics of Neurotechnology (2025).</p>
<p>[2] OECD, Recommendation of the Council on Responsible Innovation in Neurotechnology, OECD/LEGAL/0457 (2019).</p>
<p>[3] Sergiu P. Pa&#537;ca, et al., <em>The Need for a Global Effort to Attend to Human Neural Organoid and Assembloid Research</em>, 390 Science 574 (2025).</p>
<p>[4] Mitch Leslie, <em>Lab-Grown Models of Human Brains are Advancing Rapidly. Can Ethics Keep Pace?</em>, Science (Nov. 18, 2025), https://www.science.org/content/article/lab-grown-models-human-brains-are-advancing-rapidly-can-ethics-keep-pace.</p>
<p>[5] H. Isaac Chen, Hongjun Song &amp; Guo&#8208;li Ming, <em>Applications of Human Brain Organoids to Clinical Problems</em>, 248 Developmental Dynamics 53 (2019).</p>
<p>[6] Madeline G. Andrews &amp; Arnold R. Kriegstein, <em>Challenges of Organoid Research</em>, 45 Annual Review of Neuroscience 23 (2022).</p>
<p>[7] Sergiu P. Pa&#537;ca, et al., <em>A Nomenclature Consensus for Nervous System Organoids and Assembloids</em>,&nbsp;609 Nature&nbsp;907 (2022).</p>
<p>[8] MIND Act of 2025, S.2925, 119th Cong. &sect;3(5) (2025).</p>
<p>[9] Walter G. Johnson, <em>It&rsquo;s (Not) Just Semantics: &ldquo;Neurotechnology&rdquo; as a Novel Space of Transnational Law</em>, 50 Law &amp; Social Inquiry 865 (2025).</p>]]></content>
	<updated>2026-01-29T00:13:10+00:00</updated>
	<author><name>Walter Johnson</name></author>
	<source>
		<id>https://law.stanford.edu/blog/lawandbiosciences/</id>
		<link rel="self" href="https://law.stanford.edu/blog/lawandbiosciences/"/>
		<updated>2026-01-29T00:13:10+00:00</updated>
		<title>Law and Biosciences Blog - Stanford Law School</title></source>

	<category term="asilomar"/>

	<category term="bioethics"/>

	<category term="health law"/>

	<category term="international law"/>

	<category term="neuroscience"/>

	<category term="neurotechnology"/>

	<category term="organoids"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-01-28:/277987</id>
	<link href="https://www.gautrais.com/presse/sonnettes-intelligentes-contre-le-crime/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=sonnettes-intelligentes-contre-le-crime" rel="alternate" type="text/html"/>
	<title type="html">Sonnettes intelligentes contre le crime (La Presse, 27 janvier 2026)</title>
	<summary type="html"><![CDATA[<p>L&rsquo;utilit&eacute; des registres de cam&eacute;ras pour les enqu&ecirc;tes polici&egrave;res ne fait pas de doute. Mais pourraien...</p>]]></summary>
	<content type="html"><![CDATA[<p>L&rsquo;utilit&eacute; des registres de cam&eacute;ras pour les enqu&ecirc;tes polici&egrave;res ne fait pas de doute. Mais pourraient-ils compromettre le droit &agrave; la vie priv&eacute;e des citoyens&nbsp;?&nbsp;<em>La&nbsp;Presse</em>&nbsp;a demand&eacute; &agrave; deux avocats leur avis.</p>
<h4><a href="https://www.lapresse.ca/actualites/trois-rivieres/sonnettes-intelligentes-contre-le-crime/2026-01-27/qr/que-disent-les-experts-en-vie-privee.php" rel="noopener noreferrer" target="_blank">Pour en savoir +</a></h4>]]></content>
	<updated>2026-01-28T03:46:47+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-01-28T03:46:47+00:00</updated>
		<title>Vincent Gautrais</title></source>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-01-26:/277889</id>
	<link href="https://www.gautrais.com/blogue/2026/01/26/42-millions-deuros-damende-le-prix-a-payer-pour-manquement-au-rgpd/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=42-millions-deuros-damende-le-prix-a-payer-pour-manquement-au-rgpd" rel="alternate" type="text/html"/>
	<title type="html">42 millions d’euros d’amende&amp;#160;: le prix à payer pour manquement au RGPD&amp;#160;?</title>
	<summary type="html"><![CDATA[<p>Lola Gregorowius est &eacute;tudiante dans le cadre du cours DRT6929 (Vie priv&eacute;e + Num&eacute;rique) (Hiver 2026)&nbsp;...</p>]]></summary>
	<content type="html"><![CDATA[<p><strong><a href="https://www.gautrais.com/files/sites/185/2026/01/LolaG.jpg" rel="noopener noreferrer" target="_blank"><img decoding="async" src="https://www.gautrais.com/files/sites/185/2026/01/LolaG-475x610.jpg" alt="" srcset="https://www.gautrais.com/files/sites/185/2026/01/LolaG-475x610.jpg 475w,https://www.gautrais.com/files/sites/185/2026/01/LolaG-725x931.jpg 725w,https://www.gautrais.com/files/sites/185/2026/01/LolaG.jpg 739w,https://www.gautrais.com/files/sites/185/2026/01/LolaG-475x610.jpg 475w,https://www.gautrais.com/files/sites/185/2026/01/LolaG-725x931.jpg 725w,https://www.gautrais.com/files/sites/185/2026/01/LolaG.jpg 739w" sizes="(max-width: 167px) 100vw, 167px" referrerpolicy="no-referrer" loading="lazy"></a>Lola Gregorowius est &eacute;tudiante dans le cadre du cours DRT6929 (Vie priv&eacute;e + Num&eacute;rique) (Hiver 2026)&nbsp;&nbsp;</strong></p>
<p>Le 13 janvier 2026, la CNIL (<em>Commission nationale de l&rsquo;informatique et des libert&eacute;s</em>) a condamn&eacute; les entreprises fran&ccedil;aises FREE et FREE Mobile &agrave; hauteur de 42 millions d&rsquo;euros d&rsquo;amende (<em>environ 67771794.9 CAD</em>) pour manquement au RGPD (<em>le r&egrave;glement g&eacute;n&eacute;ral sur la protection des donn&eacute;es</em>).</p>
<p>Pour rappel, le RGPD, entr&eacute; en vigueur en 2018, est un r&egrave;glement europ&eacute;en qui encadre le traitement des donn&eacute;es personnelles au sein de l&rsquo;Union Europ&eacute;enne. Il s&rsquo;applique aux organismes publics et priv&eacute;s qui traitent des donn&eacute;es personnelles.</p>
<p>La CNIL est une autorit&eacute; administrative ind&eacute;pendante qui est charg&eacute;e de veiller &agrave; la protection des donn&eacute;es personnelles. Elle a un pouvoir d&rsquo;investigation, de contr&ocirc;le et de sanction en cas de manquement.</p>
<p>Pour information, l&rsquo;entreprise fran&ccedil;aise <a href="https://www.legifrance.gouv.fr/cnil/id/CNILTEXT000037856073/" rel="noopener noreferrer" target="_blank">Bouygues Telecom avait &eacute;t&eacute; condamn&eacute;e en 2018 par la CNIL pour manquement &agrave; la loi Informatique et Libert&eacute;s</a>. Le montant de l&rsquo;amende s&rsquo;&eacute;levait &agrave;&nbsp; 250 000 euros (<em>environ 405324,75 CAD</em>) seulement. La sanction prononc&eacute;e &agrave; l&rsquo;encontre <a href="https://www.legifrance.gouv.fr/cnil/id/CNILTEXT000053352643" rel="noopener noreferrer" target="_blank">FREE </a>et<a href="https://www.legifrance.gouv.fr/cnil/id/CNILTEXT000053352664" rel="noopener noreferrer" target="_blank"> FREE Mobile</a> n&rsquo;est donc pas anodine et montre que la protection des donn&eacute;es personnelles a pris une place importante ces derni&egrave;res ann&eacute;es.</p>
<p>Nous verrons dans un premier temps le contexte de cette d&eacute;cision, puis nous analyserons son contenu et enfin sa port&eacute;e.</p>
<h2>1. The context</h2>
<h4>A. An unprecedented hack</h4>
<p>FREE and FREE Mobile are French telecommunications operators holding a significant share of that market. The telecom giant was founded in 1999, well before the GDPR, which dates from 2018. The companies therefore had to adapt to its arrival and overhaul their data protection systems, which was no small task.</p>
<p>Indeed, this is not the first time the company has been at the centre of controversy over the protection of its users&rsquo; data. The CNIL had already fined it 300,000 euros in <a href="https://www.legifrance.gouv.fr/cnil/id/CNILTEXT000044810599?isSuggest=true" rel="noopener noreferrer" target="_blank">2021</a>, and again for the same amount in <a href="https://www.delsolavocats.com/La-CNIL-prononce-une-sanction-de-300-000-euros-a-l-encontre-de-la-societe-FREE" rel="noopener noreferrer" target="_blank">2022</a>. In both cases, a GDPR breach was at issue. Clearly, this is not the first time FREE has overstepped the legal limits on protecting its subscribers&rsquo; personal data.</p>
<p>So why did the CNIL decide to raise the fine so sharply this time?</p>
<p>In October 2024, a hacker managed to infiltrate the IT system holding the personal data of FREE&rsquo;s customers. <strong>24 million subscriber contracts</strong> were affected. It was one of the <strong>largest data leaks</strong> recorded in France in 2024. We now know that all of this data was resold.</p>
<p>Following the incident, more than <strong>2,000 complaints</strong> from users were filed with the CNIL.</p>
<p>The records of such a large network are highly sensitive; at issue are banking details such as IBANs. The administrative authority therefore intervened once again. After more than a year of investigation, the CNIL identified <a href="https://www.cnil.fr/fr/sanction-free-2026" rel="noopener noreferrer" target="_blank">several GDPR breaches</a> by FREE and FREE Mobile.</p>
<p>Which brings us to the very heavy sanction handed down by the CNIL&rsquo;s restricted committee.</p>
<h4>B. A sanction heavy with meaning</h4>
<p>Since 2024, the number of data leaks in France has risen by <strong>20%</strong> compared with 2023 (<a href="https://www.cnil.fr/fr/violations-massives-de-donnees-en-2024-quels-sont-les-principaux-enseignements-mesures-a-prendre" rel="noopener noreferrer" target="_blank">according to CNIL figures</a>). Figures for 2025 have not yet been published, but France is known to be among the EU countries most affected by data leaks.</p>
<p>The CNIL commented on the situation:</p>
<blockquote><p>&ldquo;[&hellip;] analysis of the various phases of the breaches reveals that a succession of common security flaws allowed the attacker to move from one stage to the next.&rdquo;</p></blockquote>
<p>A large share of leaks would therefore stem from <strong>flaws in companies&rsquo; security systems</strong>.</p>
<p>With this in mind, the CNIL sanctioned FREE on the basis that it had ample means to protect its users&rsquo; data, all the more so given how sensitive that data is. By deciding to sanction the telecom company severely, it made an example of it for other companies that process personal data.</p>
<p>We now need to analyze the decision to better understand the stakes of this unprecedented sanction.</p>
<h2>2. Analysis of the CNIL&rsquo;s sanction</h2>
<p>The CNIL identified several GDPR breaches, on which it based its investigation and, later, its decision. It is signalling to other large holders of personal data that <strong>the GDPR is not optional.</strong></p>
<h4>A. A breach of the obligation to ensure the security of personal data (<a href="https://www.cnil.fr/fr/reglement-europeen-protection-donnees/chapitre4#Article32" rel="noopener noreferrer" target="_blank">Article 32 GDPR</a>)</h4>
<p>The administrative body found that the security system FREE had put in place was ineffective and did not ensure the confidentiality of personal data. It noted that risk can never be eliminated entirely but must be reduced as far as possible, and it ordered the company to continue its efforts.</p>
<h4>B. A breach of the obligation to notify the individuals affected by the data breach (<a href="https://www.cnil.fr/fr/reglement-europeen-protection-donnees/chapitre4#Article34" rel="noopener noreferrer" target="_blank">Article 34 GDPR</a>)</h4>
<p>FREE and FREE Mobile only partially fulfilled the obligation to notify the individuals affected by the breach, notably through an informational email, a toll-free number and an internal unit dedicated to handling requests.</p>
<p>The CNIL, however, found this insufficient:</p>
<blockquote><p>&ldquo;[&hellip;] the email sent did not contain all the necessary information [&hellip;], finding in particular that these omissions prevented the individuals concerned from directly understanding the consequences of the breach [&hellip;].&rdquo;</p></blockquote>
<h4>C. A breach of the obligation to retain personal data for a limited period (<a href="https://www.cnil.fr/fr/reglement-europeen-protection-donnees/chapitre2#Article5" rel="noopener noreferrer" target="_blank">Article 5(1)(e) GDPR</a>)</h4>
<p>Article 5(1)(e) of the GDPR requires that personal data be kept for no longer than necessary. FREE Mobile retained millions of its subscribers&rsquo; records, without justification, well beyond the permitted period. The CNIL ordered the company to implement a sorting system to end the unnecessary storage of sensitive data.</p>
<h2>3. The effects of the CNIL&rsquo;s sanction</h2>
<p>In light of this heavy penalty and the CNIL&rsquo;s other recent sanctions, the authority clearly intends to tighten its oversight of, and impact on, French companies. Faced with ever more numerous and ingenious cyberattacks, suitable and effective protection systems are essential. On this point, see the <a href="https://www.donneespersonnelles.fr/sanctions-rgpd-2025-priorites-cnil" rel="noopener noreferrer" target="_blank">paper by Ma&icirc;tre Thi&eacute;baut Devergranne</a>, which deciphers the CNIL&rsquo;s new priorities.</p>
<p>One may then ask whether these higher fines will carry over to future personal data leak cases, or whether FREE is a case apart.</p>
<p>To conclude, FREE and FREE Mobile have responded to the CNIL&rsquo;s sanction by filing an appeal before the French Conseil d&rsquo;&Eacute;tat. They consider the fine disproportionate to the breaches committed, particularly in view of similar past cases.</p>
<p>Between challenges to the CNIL&rsquo;s ever-heavier sanctions, rising numbers of personal data leaks and ineffective security systems, the CNIL&rsquo;s choices will be decisive for the protection of personal information in the years to come.</p>
]]></content>
	<updated>2026-01-26T16:10:13+00:00</updated>
	<author><name>Vincent Gautrais</name></author>
	<source>
		<id>https://www.gautrais.com</id>
		<link rel="self" href="https://www.gautrais.com"/>
		<updated>2026-01-26T16:10:13+00:00</updated>
		<title>Vincent Gautrais</title></source>

	<category term="cours"/>

	<category term="mes étudiant-e-s"/>


</entry>

<entry>
	<id>tag:vifa-recht.de,2026-01-22:/277619</id>
	<link href="https://law.stanford.edu/2025/12/20/stanford-computational-antitrust-a-year-in-review-2025/" rel="alternate" type="text/html"/>
	<title type="html">Stanford Computational Antitrust – A Year in Review 2025</title>
	<summary type="html"><![CDATA[<p>We are pleased to share the 2025 Year in Review of the Stanford Computational Antitrust project. We ...</p>]]></summary>
	<content type="html"><![CDATA[<p>We are pleased to share the 2025 Year in Review of the Stanford Computational Antitrust project. We reflect on another year of consolidation and international reach for the project.</p>
<p>In 2025, the Project further strengthened its role as a global reference point for research at the intersection of competition law, economics, and computational methods, with a particular focus on how antitrust enforcement is adapting to data-intensive markets and artificial intelligence.</p>
<p><a href="https://law.stanford.edu/publications/stanford-computational-antitrust/" rel="noopener noreferrer" target="_blank">Read full report</a></p>]]></content>
	<updated>2025-12-20T22:38:55+00:00</updated>
	<author><name>Stanford Computational Antitrust Project</name></author>
	<source>
		<id>https://law.stanford.edu/blog/codex/</id>
		<link rel="self" href="https://law.stanford.edu/blog/codex/"/>
		<updated>2025-12-20T22:38:55+00:00</updated>
		<title>CodeX - Stanford Law School</title></source>

	<category term="computational antitrust"/>


</entry>


</feed>
