<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Trials & Errors]]></title><description><![CDATA[Various news, thoughts, and findings on the intersection of law, policy, and artificial intelligence.]]></description><link>https://www.trialserrors.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!CL2J!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49f81295-4b2b-4bb7-9299-b8ae7bb6df2b_1024x1024.png</url><title>Trials &amp; Errors</title><link>https://www.trialserrors.ai</link></image><generator>Substack</generator><lastBuildDate>Sun, 10 May 2026 18:12:03 GMT</lastBuildDate><atom:link href="https://www.trialserrors.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Peter Henderson]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[trialserrors@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[trialserrors@substack.com]]></itunes:email><itunes:name><![CDATA[Peter Henderson]]></itunes:name></itunes:owner><itunes:author><![CDATA[Peter Henderson]]></itunes:author><googleplay:owner><![CDATA[trialserrors@substack.com]]></googleplay:owner><googleplay:email><![CDATA[trialserrors@substack.com]]></googleplay:email><googleplay:author><![CDATA[Peter Henderson]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Can NeurIPS Be Required to Reject Papers from Sanctioned Institutions under the First Amendment? 
]]></title><description><![CDATA[Some quick thoughts and background on the state of the First Amendment.]]></description><link>https://www.trialserrors.ai/p/can-neurips-be-required-to-reject</link><guid isPermaLink="false">https://www.trialserrors.ai/p/can-neurips-be-required-to-reject</guid><dc:creator><![CDATA[Peter Henderson]]></dc:creator><pubDate>Thu, 26 Mar 2026 23:21:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CtGC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e273e19-8b13-4fee-b506-3a1f6a44d070_680x304.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>NeurIPS has <a href="https://neurips.cc/Conferences/2026/MainTrackHandbook">announced</a> that it will not accept or publish submissions from OFAC-sanctioned institutions. The handbook now states that &#8220;providing &#8216;services&#8217; (which includes peer review, editing, and publishing) to individuals representing sanctioned institutions is prohibited.&#8221;<br></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!z_J_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F089686fe-48fa-446c-9002-bb48c9ae418f_787x253.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!z_J_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F089686fe-48fa-446c-9002-bb48c9ae418f_787x253.jpeg 424w, https://substackcdn.com/image/fetch/$s_!z_J_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F089686fe-48fa-446c-9002-bb48c9ae418f_787x253.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!z_J_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F089686fe-48fa-446c-9002-bb48c9ae418f_787x253.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!z_J_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F089686fe-48fa-446c-9002-bb48c9ae418f_787x253.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!z_J_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F089686fe-48fa-446c-9002-bb48c9ae418f_787x253.jpeg" width="787" height="253" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/089686fe-48fa-446c-9002-bb48c9ae418f_787x253.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:253,&quot;width&quot;:787,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Image&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Image" title="Image" srcset="https://substackcdn.com/image/fetch/$s_!z_J_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F089686fe-48fa-446c-9002-bb48c9ae418f_787x253.jpeg 424w, https://substackcdn.com/image/fetch/$s_!z_J_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F089686fe-48fa-446c-9002-bb48c9ae418f_787x253.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!z_J_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F089686fe-48fa-446c-9002-bb48c9ae418f_787x253.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!z_J_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F089686fe-48fa-446c-9002-bb48c9ae418f_787x253.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The linked list includes a long-time sponsor of NeurIPS, Huawei! 
<br><br>[Though technically Huawei is &#8220;sanctioned&#8221; under an EO and is not on the SDN list, so there are fewer restrictions on US entities/persons dealing with Huawei. NeurIPS has <a href="https://neurips.cc/">since</a> clarified that it applies this policy only to SDN-sanctioned individuals/entities, which makes sense.]</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CtGC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e273e19-8b13-4fee-b506-3a1f6a44d070_680x304.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CtGC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e273e19-8b13-4fee-b506-3a1f6a44d070_680x304.jpeg 424w, https://substackcdn.com/image/fetch/$s_!CtGC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e273e19-8b13-4fee-b506-3a1f6a44d070_680x304.jpeg 848w, https://substackcdn.com/image/fetch/$s_!CtGC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e273e19-8b13-4fee-b506-3a1f6a44d070_680x304.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!CtGC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e273e19-8b13-4fee-b506-3a1f6a44d070_680x304.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CtGC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e273e19-8b13-4fee-b506-3a1f6a44d070_680x304.jpeg" width="680" height="304" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0e273e19-8b13-4fee-b506-3a1f6a44d070_680x304.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:304,&quot;width&quot;:680,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Image&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Image" title="Image" srcset="https://substackcdn.com/image/fetch/$s_!CtGC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e273e19-8b13-4fee-b506-3a1f6a44d070_680x304.jpeg 424w, https://substackcdn.com/image/fetch/$s_!CtGC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e273e19-8b13-4fee-b506-3a1f6a44d070_680x304.jpeg 848w, https://substackcdn.com/image/fetch/$s_!CtGC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e273e19-8b13-4fee-b506-3a1f6a44d070_680x304.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!CtGC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e273e19-8b13-4fee-b506-3a1f6a44d070_680x304.jpeg 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>But apparently this policy is also not new. 
<a href="https://www.reddit.com/r/MachineLearning/comments/1nmb8as/d_neurips_rejecting_papers_from_sanctioned/">Here</a>&#8217;s a post from Russian researchers receiving a desk reject due to the same policy last year.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rb9L!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcafaea86-b06d-48e2-8995-33996473b28c_1280x321.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rb9L!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcafaea86-b06d-48e2-8995-33996473b28c_1280x321.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rb9L!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcafaea86-b06d-48e2-8995-33996473b28c_1280x321.jpeg 848w, https://substackcdn.com/image/fetch/$s_!rb9L!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcafaea86-b06d-48e2-8995-33996473b28c_1280x321.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!rb9L!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcafaea86-b06d-48e2-8995-33996473b28c_1280x321.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rb9L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcafaea86-b06d-48e2-8995-33996473b28c_1280x321.jpeg" width="1280" height="321" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cafaea86-b06d-48e2-8995-33996473b28c_1280x321.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:321,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;r/MachineLearning - [D] NeurIPS: rejecting papers from sanctioned affiliations mid-process&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="r/MachineLearning - [D] NeurIPS: rejecting papers from sanctioned affiliations mid-process" title="r/MachineLearning - [D] NeurIPS: rejecting papers from sanctioned affiliations mid-process" srcset="https://substackcdn.com/image/fetch/$s_!rb9L!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcafaea86-b06d-48e2-8995-33996473b28c_1280x321.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rb9L!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcafaea86-b06d-48e2-8995-33996473b28c_1280x321.jpeg 848w, https://substackcdn.com/image/fetch/$s_!rb9L!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcafaea86-b06d-48e2-8995-33996473b28c_1280x321.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!rb9L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcafaea86-b06d-48e2-8995-33996473b28c_1280x321.jpeg 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft 
pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><br>The more recent bans have sparked outrage on X, with many researchers refusing to review or serve as ACs for the conference. NeurIPS has noted that the policy is a legal requirement, not a choice on its part, and that it is currently getting legal advice on the matter.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/NeurIPSConf/status/2037066494983426374?s=20&quot;,&quot;full_text&quot;:&quot;NeurIPS is aware of the community's concerns regarding the list of sanctions. NeurIPS is an inclusive community focused on free scientific discourse.  
We deeply value the research that comes from everyone in our community.\n \nThe present concerns are not about science or academic&quot;,&quot;username&quot;:&quot;NeurIPSConf&quot;,&quot;name&quot;:&quot;NeurIPS Conference&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1324732596622950401/jmCoOBzX_normal.jpg&quot;,&quot;date&quot;:&quot;2026-03-26T07:17:53.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:78,&quot;retweet_count&quot;:29,&quot;like_count&quot;:205,&quot;impression_count&quot;:88954,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div><p>A few folks have been asking: how is this possible in the United States, given the First Amendment? Is peer review a &#8220;service&#8221; the government can prohibit? Or is it a First Amendment-protected activity? <br><br>As usual when my AI and law worlds collide, the answer is more complicated than either side seems to think. (Though, again, this isn&#8217;t legal advice, just a legal academic take and some general background.)<br><br>tl;dr: If NeurIPS decided NOT to have this policy and the government decided to enforce it, it&#8217;s a sufficiently grey-area issue that I think we could see a case like <em>NeurIPS v. US Treasury</em> at the Supreme Court!</p><h2>The case for protection</h2><p>There&#8217;s certainly a case to be made: NeurIPS could argue that review/publishing would receive First Amendment protections. Just last year, in <em><a href="https://knightcolumbia.org/cases/the-foundation-for-global-political-exchange-v-department-of-the-treasury">Foundation for Global Political Exchange v. Treasury</a></em> (settled Nov. 2024), a similar issue came up.</p><blockquote><p>The [Foundation for Global Political Exchange] is a U.S. 
non-profit organization that promotes professional and academic enrichment through convenings in the Middle East and North Africa called &#8220;Exchanges.&#8221; Each Exchange involves small-group, immersive dialogues that allow participants&#8212;including journalists, human rights advocates, and government officials&#8212;to engage with and question thirty to forty of the key stakeholders from across the political landscape of a subject country. In advance of the Foundation&#8217;s January 2023 Beirut Exchange, OFAC informed the Foundation that it could not lawfully include in these discussions five prominent political figures who were designated under a U.S. sanctions regime or were members of a designated organization.</p></blockquote><p>OFAC reversed course after the Knight First Amendment Institute sued, conceding through a settlement that including sanctioned individuals as conference speakers was not a prohibited &#8220;service&#8221; &#8212; so long as no financial transactions, lodging, or other things of value were provided.<br><br>The <a href="https://aupresses.org/news/ofac-lawsuit-background/">Berman Amendment</a> (1988) and the Free Trade in Ideas Act (1994) explicitly exempt &#8220;information and informational materials, including but not limited to, publications&#8221; from the President&#8217;s sanctions authority under IEEPA. When OFAC tried to regulate scholarly editing in the early 2000s, <a href="https://spectrum.ieee.org/will-us-sanctions-have-chilling-effect-on-scholarly-publishing">IEEE and others pushed back</a>. OFAC <a href="https://www.treasury.gov/press-center/press-releases/Pages/js1295.aspx">backed down in 2004</a>, confirming that peer review and copy editing were permissible.</p><p>More generally, <em><a href="https://supreme.justia.com/cases/federal/us/381/301/">Lamont v. Postmaster General</a></em> (1965) established that U.S. 
citizens have a First Amendment right to <em>receive</em> speech from abroad &#8212; in that case, communist magazines from China. Brennan&#8217;s concurrence called the right to receive publications &#8220;a fundamental right.&#8221; <em><a href="https://supreme.justia.com/cases/federal/us/418/241/">Miami Herald v. Tornillo</a></em> (1974) held that editorial decisions about what to publish are exercises of protected editorial judgment. Peer review is the academic analog.</p><h2>But national security caselaw cuts the other way</h2><p>Unfortunately for First Amendment fans, courts have built a body of doctrine that defers heavily to the government when it invokes national security, even when the regulated activity is speech or otherwise expressive.</p><p>The big one is <em><a href="https://supreme.justia.com/cases/federal/us/561/1/">Holder v. Humanitarian Law Project</a></em> (2010). The Supreme Court upheld, 6-3, the ban on providing &#8220;material support&#8221; to designated foreign terrorist organizations &#8212; including &#8220;training,&#8221; &#8220;expert advice,&#8221; and &#8220;service&#8221; &#8212; even when directed at teaching nonviolent conflict resolution. The Court deferred to Congress&#8217;s finding that even peaceful support &#8220;frees up resources&#8221; for illicit activities.<br><br>For more on the history of OFAC and scholarly publishing, see the <a href="https://knightcolumbia.org/content/the-long-online-shadow-of-the-material-support-law">Knight Institute&#8217;s analysis</a> of the material support law&#8217;s chilling effect, and this <a href="https://nyulawreview.org/issues/volume-83-number-6/an-unfree-trade-in-ideas-how-ofacs-regulations-restrain-first-amendment-rights/">NYU Law Review note</a> on how OFAC&#8217;s regulations restrain First Amendment rights. But <em>Holder</em> has created problems like this ever since 2010. 
And it is very possible that it could be used as precedent against NeurIPS in this case.</p><h2>Two things I think the community is missing</h2><p><strong>Scope.</strong> There seems to be a misunderstanding about who this applies to. The OFAC <a href="https://sanctionssearch.ofac.treas.gov/">SDN list</a> targets specific institutions, not entire countries. This likely affects a few entities on the list, many (most?) of which are unlikely to submit to NeurIPS. It does not cover all researchers from any particular country. Chinese universities are generally on BIS export control lists, not OFAC sanctions, and the legal frameworks are distinct.</p><p><strong>The speech/services line.</strong> This is what I find most interesting. Clearly, conference organizers received legal advice that there&#8217;s a fine line and they might be providing services.<br></p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/yisongyue/status/2036882367974134001?s=20&quot;,&quot;full_text&quot;:&quot;Participation in peer review at <span class=\&quot;tweet-fake-link\&quot;>@NeurIPSConf</span> (or <span class=\&quot;tweet-fake-link\&quot;>@icmlconf</span>, <span class=\&quot;tweet-fake-link\&quot;>@iclr_conf</span>, <span class=\&quot;tweet-fake-link\&quot;>@CVPR</span>, <span class=\&quot;tweet-fake-link\&quot;>@COLM_conf</span>, etc.) can be considered providing a \&quot;service\&quot; under U.S. sanctions law.  \n\nU.S. 
law generally prohibits providing services to designated sanctioned individuals or entities, including cases &quot;,&quot;username&quot;:&quot;yisongyue&quot;,&quot;name&quot;:&quot;Yisong Yue&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1458606597567967240/iUCYB0Ux_normal.jpg&quot;,&quot;date&quot;:&quot;2026-03-25T19:06:14.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HEROvnVbUAASXyw.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/cxE134NJM9&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:7,&quot;retweet_count&quot;:1,&quot;like_count&quot;:64,&quot;impression_count&quot;:16992,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p><br><br>The <em>GPE</em> settlement does draw a line, treating speech-related activities as permissible, but it&#8217;s unclear at what point those activities cross into sanctionable services. 
Even in the GPE settlement letter, OFAC <a href="https://knightcolumbia.org/documents/h6fexgrmd3">wrote</a>:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ke-k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6047f299-32eb-45a3-b062-99d41d2f26f2_888x168.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ke-k!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6047f299-32eb-45a3-b062-99d41d2f26f2_888x168.png 424w, https://substackcdn.com/image/fetch/$s_!ke-k!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6047f299-32eb-45a3-b062-99d41d2f26f2_888x168.png 848w, https://substackcdn.com/image/fetch/$s_!ke-k!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6047f299-32eb-45a3-b062-99d41d2f26f2_888x168.png 1272w, https://substackcdn.com/image/fetch/$s_!ke-k!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6047f299-32eb-45a3-b062-99d41d2f26f2_888x168.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ke-k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6047f299-32eb-45a3-b062-99d41d2f26f2_888x168.png" width="888" height="168" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6047f299-32eb-45a3-b062-99d41d2f26f2_888x168.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:168,&quot;width&quot;:888,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:82524,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trialserrors.substack.com/i/192166741?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6047f299-32eb-45a3-b062-99d41d2f26f2_888x168.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ke-k!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6047f299-32eb-45a3-b062-99d41d2f26f2_888x168.png 424w, https://substackcdn.com/image/fetch/$s_!ke-k!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6047f299-32eb-45a3-b062-99d41d2f26f2_888x168.png 848w, https://substackcdn.com/image/fetch/$s_!ke-k!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6047f299-32eb-45a3-b062-99d41d2f26f2_888x168.png 1272w, https://substackcdn.com/image/fetch/$s_!ke-k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6047f299-32eb-45a3-b062-99d41d2f26f2_888x168.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>If NeurIPS only published papers, the 1A argument is probably stronger. The Berman Amendment and OFAC&#8217;s own 2004 ruling protect scholarly publishing. 
But NeurIPS also provides conference services &#8212; venue access, registration, networking events, lodging coordination. For sanctioned entities, those start to look like the kind of economic benefits OFAC can regulate. The problem for NeurIPS actually gets harder at the venue itself, where there are more opportunities for NeurIPS (and potentially event sponsors) to cross over into much more problematic territory. Think about what happens if, e.g., someone wins a GPU, gets travel covered by a workshop, or is hosted at a company event.</p><h2>At the end of the day</h2><p>It will be interesting to see what NeurIPS does. But it&#8217;s important to understand the current legal landscape of First Amendment law to engage with the issue more effectively. Hopefully this is helpful background.</p>]]></content:encoded></item><item><title><![CDATA[Quick Take: Are open-weight AI models really getting a fair shake in capabilities evals? ]]></title><description><![CDATA[Thoughts on Anthropic's postmortem.]]></description><link>https://www.trialserrors.ai/p/quick-take-are-open-weight-ai-models</link><guid isPermaLink="false">https://www.trialserrors.ai/p/quick-take-are-open-weight-ai-models</guid><dc:creator><![CDATA[Peter Henderson]]></dc:creator><pubDate>Wed, 24 Sep 2025 13:45:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!NqNW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf57764e-de8c-401d-ae71-c38b5fd2fa73_1024x675.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Anthropic recently wrote a <a href="https://www.anthropic.com/engineering/a-postmortem-of-three-recent-issues">postmortem</a> on the increased rate of subpar model responses for Claude. It was really well done, showed the engineering depth of the folks on the team, and is (in my opinion) a model of the kind of information sharing that should continue to be made public. 
But I wanted to zoom in on one interesting tidbit from the post.</p><blockquote><p>On August 5, some Sonnet 4 requests were misrouted to servers configured for the upcoming <a href="https://docs.claude.com/en/docs/build-with-claude/context-windows#1m-token-context-window">1M token</a> <a href="https://docs.claude.com/en/docs/build-with-claude/context-windows">context window</a>. This bug initially affected 0.8% of requests. On August 29, a routine load balancing change unintentionally increased the number of short-context requests routed to the 1M context servers. At the worst impacted hour on August 31, 16% of Sonnet 4 requests were affected.</p><p>Approximately 30% of Claude Code users who made requests during this period had at least one message routed to the wrong server type, resulting in degraded responses. On Amazon Bedrock, misrouted traffic peaked at 0.18% of all Sonnet 4 requests from August 12. Incorrect routing affected less than 0.0004% of requests on Google Cloud's Vertex AI between August 27 and September 16.</p><p>However, some users were affected more severely, as our routing is "sticky". This meant that once a request was served by the incorrect server, subsequent follow-ups were likely to be served by the same incorrect server.</p><p><strong>Resolution:</strong> We fixed the routing logic to ensure short- and long-context requests were directed to the correct server pools. We deployed the fix on September 4. Rollout to our first-party platform and Google Cloud's Vertex AI was completed by September 16, and to AWS Bedrock by September 18.</p></blockquote><p>Notably, this seems to imply that long-context requests are served by one model (or at least one configuration of a model), and short-context requests are served by another. 
Routing to the wrong model configuration will yield worse performance.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.trialserrors.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The AI Law &amp; Policy Update! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NqNW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf57764e-de8c-401d-ae71-c38b5fd2fa73_1024x675.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NqNW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf57764e-de8c-401d-ae71-c38b5fd2fa73_1024x675.png 424w, https://substackcdn.com/image/fetch/$s_!NqNW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf57764e-de8c-401d-ae71-c38b5fd2fa73_1024x675.png 848w, https://substackcdn.com/image/fetch/$s_!NqNW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf57764e-de8c-401d-ae71-c38b5fd2fa73_1024x675.png 1272w, 
https://substackcdn.com/image/fetch/$s_!NqNW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf57764e-de8c-401d-ae71-c38b5fd2fa73_1024x675.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NqNW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf57764e-de8c-401d-ae71-c38b5fd2fa73_1024x675.png" width="1024" height="675" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cf57764e-de8c-401d-ae71-c38b5fd2fa73_1024x675.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:675,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:783984,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.ailawpolicy.com/i/174265268?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5065209-418e-4883-bea9-9386f1c028a3_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NqNW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf57764e-de8c-401d-ae71-c38b5fd2fa73_1024x675.png 424w, https://substackcdn.com/image/fetch/$s_!NqNW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf57764e-de8c-401d-ae71-c38b5fd2fa73_1024x675.png 848w, 
https://substackcdn.com/image/fetch/$s_!NqNW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf57764e-de8c-401d-ae71-c38b5fd2fa73_1024x675.png 1272w, https://substackcdn.com/image/fetch/$s_!NqNW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf57764e-de8c-401d-ae71-c38b5fd2fa73_1024x675.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="pullquote"><p><em><strong>tldr; Closed models are actually systems of multiple models, but we compare them against 
single-artifact open-weight models. That&#8217;s not an apples-to-apples comparison. Are open models further ahead than we think?</strong></em></p></div><p>GPT-5 is a <a href="https://openai.com/index/gpt-5-system-card/">router-based</a> system, like Claude appears to be&#8212;sometimes routing to smaller models, sometimes thinking more, etc. In many ways, this seems like an unfair shake for open models. While closed models can rely on a suite of other models and systems, open-weight models must perform well in all conditions. </p><p>Then I wonder, from a capabilities perspective, whether the open-weight ecosystem is actually not as far behind the closed-weight ecosystem. If you allowed the same routing, specialization, and systems surrounding closed models, how much better could open models get?</p><p>Similarly, we don&#8217;t know whether there are any inference-time optimizations to boost performance of closed-weight models. Are they running something like best-of-n, majority vote, or other aggregation-based approaches for selecting an output? Are they doing MCTS? Maybe not, because these methods are expensive&#8212;but we just don&#8217;t know.</p><p>For research&#8212;and policy&#8212;purposes, maybe we should be leveraging systems around open-weight models and comparing system-to-system, not model-to-system. 
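</p><p>As a concrete example of one such aggregation scheme, here is a minimal majority-vote (self-consistency) sketch: sample several candidate answers from a model and return the most common one. This is a generic technique; whether any closed provider actually runs it at inference time is, as noted, unknown.</p>

```python
from collections import Counter

def majority_vote(candidates: list[str]) -> str:
    """Return the most frequent answer among sampled candidates.

    A generic self-consistency aggregator: sample N answers, keep the mode.
    Whether closed-model providers apply anything like this is unknown.
    """
    if not candidates:
        raise ValueError("need at least one candidate answer")
    answer, _count = Counter(candidates).most_common(1)[0]
    return answer
```

<p>Wrapping an open-weight model with even this simple aggregator turns a single artifact into a small system, which is the kind of system-to-system comparison suggested above.</p><p>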
But, overall, I continue to hope that closed model providers give us metadata along with API calls so that researchers understand what exactly they&#8217;re comparing against.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FMsj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4b2aecc-ac79-41ec-bceb-dd6b6ab0dcc0_966x846.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FMsj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4b2aecc-ac79-41ec-bceb-dd6b6ab0dcc0_966x846.png 424w, https://substackcdn.com/image/fetch/$s_!FMsj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4b2aecc-ac79-41ec-bceb-dd6b6ab0dcc0_966x846.png 848w, https://substackcdn.com/image/fetch/$s_!FMsj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4b2aecc-ac79-41ec-bceb-dd6b6ab0dcc0_966x846.png 1272w, https://substackcdn.com/image/fetch/$s_!FMsj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4b2aecc-ac79-41ec-bceb-dd6b6ab0dcc0_966x846.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FMsj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4b2aecc-ac79-41ec-bceb-dd6b6ab0dcc0_966x846.png" width="728" height="637.5652173913044" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f4b2aecc-ac79-41ec-bceb-dd6b6ab0dcc0_966x846.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:846,&quot;width&quot;:966,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:955563,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.ailawpolicy.com/i/174265268?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4b2aecc-ac79-41ec-bceb-dd6b6ab0dcc0_966x846.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!FMsj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4b2aecc-ac79-41ec-bceb-dd6b6ab0dcc0_966x846.png 424w, https://substackcdn.com/image/fetch/$s_!FMsj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4b2aecc-ac79-41ec-bceb-dd6b6ab0dcc0_966x846.png 848w, https://substackcdn.com/image/fetch/$s_!FMsj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4b2aecc-ac79-41ec-bceb-dd6b6ab0dcc0_966x846.png 1272w, https://substackcdn.com/image/fetch/$s_!FMsj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4b2aecc-ac79-41ec-bceb-dd6b6ab0dcc0_966x846.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.trialserrors.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The AI Law &amp; Policy Update! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI "Born Secret"? The Atomic Energy Act, AI, and Federalism]]></title><description><![CDATA[A law & policy deep dive.]]></description><link>https://www.trialserrors.ai/p/ai-born-secret-the-atomic-energy</link><guid isPermaLink="false">https://www.trialserrors.ai/p/ai-born-secret-the-atomic-energy</guid><dc:creator><![CDATA[Kylie Zhang]]></dc:creator><pubDate>Wed, 17 Sep 2025 13:47:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!391N!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9124d719-cc30-4735-afb4-307b55125d6e_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If an advanced AI system can figure out how to build a nuclear weapon&#8212;potentially assisting adversaries in doing so&#8212;how should the government intervene? And how can model creators know about these risks? A recent swath of regulatory efforts at the state and federal levels have begun to examine chemical, biological, radiological, and nuclear (CBRN) risks from AI. For example, in September 2024, the California legislature passed the <em>Safe and Secure Innovation for Frontier Artificial Intelligence Models Act</em> (<a href="https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047">SB-1047</a>), which contained provisions regulating CBRN information. 
<a href="https://www.documentcloud.org/documents/25056617-ca-sb-1047-openai-opposition-letter/">Critics</a> argued that states shouldn't be in the business of regulating national security questions &#8211; a purview better suited for the federal government. These critics might be descriptively correct: the federal government has authority to restrict communication of nuclear data under the <a href="https://www.govinfo.gov/content/pkg/COMPS-1630/pdf/COMPS-1630.pdf">Atomic Energy Act (AEA) of 1954</a>. And these regulations may very well apply to AI, potentially preempting state efforts to regulate nuclear and radiological information risks. This post will explore the AEA, its applicability to AI, the potential impacts on state-level efforts, and policy recommendations for guiding AI safety evaluations and model releases.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!391N!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9124d719-cc30-4735-afb4-307b55125d6e_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!391N!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9124d719-cc30-4735-afb4-307b55125d6e_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!391N!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9124d719-cc30-4735-afb4-307b55125d6e_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!391N!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9124d719-cc30-4735-afb4-307b55125d6e_1024x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!391N!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9124d719-cc30-4735-afb4-307b55125d6e_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!391N!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9124d719-cc30-4735-afb4-307b55125d6e_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9124d719-cc30-4735-afb4-307b55125d6e_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1535222,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.ailawpolicy.com/i/173694420?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9124d719-cc30-4735-afb4-307b55125d6e_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!391N!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9124d719-cc30-4735-afb4-307b55125d6e_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!391N!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9124d719-cc30-4735-afb4-307b55125d6e_1024x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!391N!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9124d719-cc30-4735-afb4-307b55125d6e_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!391N!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9124d719-cc30-4735-afb4-307b55125d6e_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>The Atomic Energy Act</h2><p>In what is known as the "<a href="https://en.wikipedia.org/wiki/Born_secret">born secret</a>" 
(or &#8220;born classified&#8221;) doctrine, the Atomic Energy Act of 1954 holds that certain nuclear weapons information is classified from the moment of its creation, regardless of how it was developed or by whom.<sup>1</sup> Typically, classified information is &#8220;born in the open&#8221; and must be made secret by an affirmative government act. <a href="https://en.wikipedia.org/wiki/Restricted_Data">Restricted data under the AEA</a>, by contrast, is <em>automatically classified at inception</em>&#8212;whether created in a government lab, private research facility, or even <a href="http://large.stanford.edu/courses/2019/ph241/gillman2/">independently discovered by a graduate student</a>. If an individual communicates, receives, or tampers with restricted data "with intent to injure" or "with reason to believe such data will be utilized to injure" the United States, they can face criminal fines and imprisonment. In addition to penalties, the Act allows the US Attorney General to seek an injunction against any person who "is about to" violate any provision of the Act.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.trialserrors.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The AI Law &amp; Policy Update! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Lessons for how the AEA might govern foundation models are found in the only courtroom test of the AEA's restricted data provisions: <em><a href="https://law.justia.com/cases/federal/district-courts/FSupp/467/990/1376343/">U.S. v. Progressive, Inc</a></em>. That case began in 1979, when writer Howard Morland interviewed various scientists and Department of Energy employees. Using his publicly collected data, he wrote an article for the magazine <em>The Progressive</em> that explained how to build a hydrogen bomb. Though the information he collected was available in the public domain, he synthesized it in such a way that it revealed a nuclear physics breakthrough not widely known at the time. This synthesis is much like how foundation models, trained on petabytes of unclassified data, might generate nuclear secrets.</p><p>Could Morland have been at fault? The Department of Energy sued under the Atomic Energy Act of 1954's restricted data doctrine to stop the magazine from publishing Morland's article. 
The government argued that although the nuclear science information The Progressive wanted to publish was available in the public domain, "the danger lies in the exposition of certain concepts never heretofore disclosed in conjunction with one another.&#8221; The Court, with some apprehension, granted the government's preliminary injunction against the article's publication, judging that the "publication of the technical information on the hydrogen bomb contained in the article is analogous to publication of troop movements or locations in time of war and falls within the extremely narrow exception to the rule against prior restraint."</p><p>Going further, the born secret doctrine <em>means that even if someone independently derives nuclear weapons design information without access to any classified sources, that information is still legally considered restricted data and subject to the AEA's prohibitions on communication.</em> This is demonstrated in <em>U.S. v. Progressive</em>, where the government successfully argued that even synthesized public information could be "born secret" if it revealed previously undisclosed nuclear weapons concepts in combination. And, as language models continue to advance, their journalistic capabilities may well exceed Howard Morland's nuclear research capabilities. How then, do we regulate the creation and use of foundation models capable of discovering and disclosing nuclear secrets?</p><h2>The AEA Applied to AI Models</h2><p><em>US v. Progressive</em> exemplifies how courts might apply the AEA to foundation models.<sup>2</sup> For example, if a model output contains instructions on how to build a nuclear bomb (such as during <a href="https://cdn.openai.com/o1-system-card.pdf">red teaming</a> &#8211; where teams simulate adversarial behavior to probe model weaknesses), it may well be communicating restricted data in violation of the Communication of Restricted Data provision of the AEA. 
The United States Attorney General can then ask for a court order to "enjoin&#8230; such acts or practices.&#8221; The Nuclear Regulatory Commission or the Secretary of Energy will then have to show that the model "has engaged or is about to engage in any such acts or practices" that violate the AEA. If they can, then the Court might grant an injunction to stop the model's release.</p><p>To put it differently, should either the Secretary of Energy or Nuclear Regulatory Commission suspect a model of disclosing sensitive nuclear information, they can issue a subpoena to the <strong>model developers</strong> to evaluate its outputs. Then, if the U.S. Attorney General can prove to the judge that the model exposes nuclear concepts "never heretofore disclosed in conjunction with one another," the judge could enjoin the model creators from publicly releasing the model.</p><p>Even if a foundation model outputs nuclear information that is synthesized from publicly available sources, there is a high chance that its creators will be held liable for communicating restricted data under the Atomic Energy Act of 1954.</p><h2>Open-Source Models, AI Agents, and the AEA</h2><p>So when is a large language model "born secret"? There might be some distinctions on the type of model and how it is uncovering nuclear information. An open-source model, or model weights, might be "born secret" if nuclear information is embedded in those weights in a way that can be retrieved publicly. In this case, the model&#8212;or perhaps just those weights which are attributable to the nuclear information&#8212;may be "born secret" from the moment the information was encoded in the model.</p><p>Were there some ability to prevent the model from communicating nuclear secrets, then the model might not be "born secret," only its outputs. 
This distinguishes open-weight models, where filtering is extremely difficult, from closed-weight models, whose outputs can be subject to content filters.</p><p>As legal scholar Aviam Soifer <a href="https://larc.cardozo.yu.edu/clr/vol19/iss4/8/">noted</a> about the <em>Progressive</em> case, "The Born Classified rationale could apply from the moment of the germination of these ideas and could even be applied retroactively." The decision by government officials to label something as a national security risk "moved the dispute outside the usual legal rules and beyond the ken of regular judicial processes." This means that not just an AI model's outputs, but also its research/thinking processes &#8211; maybe even its existence &#8211; could trigger classification concerns.</p><p>If a model is sufficiently capable of conducting independent scientific research to reconstruct nuclear secrets, then there's a serious question of whether the model itself becomes "born secret." Model creators cannot predict if it will synthesize publicly available nuclear information into something confidential. It is not clear whether this synthesis capability comes from information embedded in the model versus the model's ability to use tools to discover information. However, the former provides a clearer line of reasoning for the government to draw.</p><h2>Knowledge and Intent</h2><p>To violate the Atomic Energy Act, whoever "communicates, transmits, or discloses" restricted data needs to do so either "with intent to injure the United States" or "with reason to believe such data will be utilized to injure the United States." 
This raises <a href="https://www.journaloffreespeechlaw.org/hendersonhashimotolemley.pdf">typical scienter problems</a> as applied to AI: it is unclear whether a model can "intend" injury or if it has the capacity to have a "reason to believe" its actions will cause injury.<sup>3</sup> But as models approach capabilities that AI companies themselves vocally describe as dangerous, those public warnings lay a foundation for satisfying the AEA's scienter requirement.</p><p>Model creators regularly discuss the potential CBRN risks of advanced models. Anthropic in October 2024 <a href="https://www.anthropic.com/news/the-case-for-targeted-regulation">wrote</a>, "About a year ago, we warned that frontier models might pose real risks in the cyber and CBRN domains within 2-3 years. Based on the progress described above, we believe we are now substantially closer to such risks." Encountering the bounds of the AEA is not unimaginable. John Aristotle Phillips, an undergraduate at Princeton, demonstrated in 1976 the ease of designing a nuclear weapon on paper based solely on public information. The government classified his work and made it illegal to distribute under the AEA. Phillips famously <a href="https://press.uchicago.edu/ucp/books/book/chicago/R/bo15220099.html">noted</a> that:</p><blockquote><p>Suppose an average&#8212;or below-average in my case&#8212;physics student at a university could design a workable atomic bomb on paper. That would prove the point dramatically and show the federal government that stronger safeguards have to be placed on the manufacturing and use of plutonium. In short, if I could design a bomb, almost any intelligent person could.</p></blockquote><p>As models approach the general capabilities of undergraduate physics students, like Phillips, the likelihood of reaching the AEA threshold increases. This stated awareness of AI's nuclear and radiological risks may give the government more fodder for AEA action.</p><p>Moreover, <em>US v. 
Progressive</em>, while not binding, also took a narrow view of the scienter requirement &#8211; instead of examining intent <em>ex ante</em>, Judge Warren examined it <em>ex post</em>. The government argued that although the hydrogen bomb information that <em>The Progressive</em> wanted to publish was available in the public domain, the way <em>The Progressive</em> synthesized the information was only supposed to be known in classified documents. Releasing such information publicly would therefore &#8220;injure the United States or give an advantage to a foreign nation." The Court found this convincing, noting that there were "concepts within the article that it does not find in the public realm[...] concepts that are vital to the operation of the hydrogen bomb."</p><p>The Court appeared to take the publisher's "reason to believe such data would be utilized to injure the United States" &#8211; its intent &#8211; <em>as a given once the information was proven potentially injurious</em>. So, by analogy, if the government can show that a model exposes "certain concepts never heretofore disclosed in conjunction with one another" with regard to sensitive nuclear information, it is not a far stretch to claim that the model creators had reason to believe that such information could be injurious to the United States. That is especially true if they've stated <em>ex ante</em> that this could be a potential risk from advanced models.</p><h2>Federal Awareness and the Future of the AEA Applied to AI</h2><p>The federal government appears to be aware of the potential CBRN risks from foundation models. 
In response to President Biden's <a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/">EO 14110</a>, the Department of Homeland Security <a href="https://www.dhs.gov/sites/default/files/2024-06/24_0620_cwmd-dhs-cbrn-ai-eo-report-04262024-public-release.pdf">released</a> a "Report on Reducing the Risks at the Intersection of Artificial Intelligence and Chemical, Biological, Radiological, and Nuclear Threats." The DHS recommends adding AI-specific CBRN topics to regular intelligence information sharing, encouraging the development of recommended release practices and reverse engineering guardrails, and developing guidelines to safeguard the digital-to-physical frontier (among other recommendations).</p><p>But the DHS report focuses almost solely on biological and chemical outcomes. The authors emphasize that they want to "keep the document unclassified and consistent with&#8230; the unique authorities of the Department of Energy, National Nuclear Security Administration for nuclear related information under the Atomic Energy Act of 1954." Such a call-out to the AEA in a modern DHS memo about CBRN risks and AI suggests that federal agencies are aware of the potential implications of the AEA for foundation models. This is important because it means that classified official guidelines might already exist.</p><h2>What This Means for State AI Regulation</h2><p>Recent regulatory (or deregulatory) efforts have brought questions around AI federalism to the forefront. The 2025 budget bill initially <a href="https://www.techpolicy.press/us-house-passes-10year-moratorium-on-state-ai-laws/">contained</a> a provision preempting state regulation of AI for 10 years. 
In the context of California's SB 1047, <a href="https://democrats-science.house.gov/imo/media/doc/2024-08-15%20to%20Gov%20Newsom_SB1047.pdf">national politicians</a> argued that the federal government, not state governments, was best positioned to regulate AI's CBRN risks.</p><p>If we specifically consider SB-1047, we see that the proposed legislation sought to hold covered model creators liable for critical harms, which include mass casualties resulting from the creation or use of CBRN weapons. Foundation model creators, then, would be liable if their models produce novel or non-public CBRN information that directly leads to a mass casualty event. This is exactly the type of information the Communication of Restricted Data (&#8220;born secret&#8221;) provision of the AEA was enacted to keep secret.</p><p>The challenge with such a state-level restriction is that the AEA, a federal law, already regulates similar informational harms, at least in the nuclear context. The AEA can be a forceful tool to regulate foundation models suspected of conveying nuclear information. But it also creates a preemption risk for some state efforts to address CBRN, like some of SB-1047's provisions.<sup>4</sup> Since the AEA does not contain a savings clause, the federal government may already have exclusive authority to regulate nuclear information risks under field preemption.</p><h2>A Path Forward: Federal Leadership, Clear Thresholds</h2><p>Under the AEA, the government could take strong actions to assess and intervene when frontier models reach dangerous levels of capability. There might already be a hard stop on releasing certain capable models, given that post-hoc AI safeguards are fairly porous.</p><p>The government should establish clear thresholds for when models trigger nuclear secret questions and issue policy guidance to model creators on how to evaluate their models for potential risks of being a &#8220;born secret.&#8221;
This is especially urgent for open-source models where information might be embedded in the weights themselves. If such a model is released with restricted data baked in, you can't take it back &#8212; it's permanently in the wild.</p><p>As the frontier of model capabilities expands, more providers will hit thresholds that could trigger the AEA. Backchannel conversations with the government might work for a handful of big labs &#8211; but as smaller model creators approach these thresholds, there needs to be a clear process for engaging with government safety evaluations. Such evaluations should cover (1) open-source models that might contain embedded nuclear information and (2) AI systems capable of autonomous scientific research that could reconstruct nuclear secrets through tool use. The former presents an irreversible release risk; the latter raises questions about when the synthesis of information becomes &#8220;born classified.&#8221;</p><p>As states consider future legislation, they should contend with the AEA's existing coverage of nuclear information risks, including its potential to override state legislation. Rather than creating a patchwork of preemptable state regulations, we need cohesive federal policy that leverages existing tools like the AEA while establishing clear processes for safety evaluation.</p><p>Finally, the increasing likelihood that AI models will trigger the AEA&#8212;and may already have been &#8220;born secret&#8221;&#8212;brings into question whether information restrictions are the right tools in the first place. Even without LLMs, <em>The Progressive</em> reporter Morland and undergraduate Phillips found their work preemptively classified for synthesizing information available in the public domain. With LLMs, if the average person has access to AI models capable of reconstructing nuclear secrets, perhaps governance should focus more on downstream interventions than informational restrictions.</p><p><strong>Who are we?
</strong><em><a href="https://skyien-z.github.io/">Kylie Zhang</a> is an MSE candidate at Princeton University researching topics at the intersection of AI and law. <a href="https://www.peterhenderson.co/">Peter Henderson</a> is an Assistant Professor at Princeton University with appointments in the Department of Computer Science and the School of Public &amp; International Affairs, where he runs the Princeton <a href="https://www.polarislab.org/">Polaris Lab</a>. Previously, Peter received a JD-PhD from Stanford University. Every once in a while, we round up news and research at the intersection of AI and law. Also, just in case: none of this is legal advice. The views expressed here are purely our own and are not those of any entity, organization, government, or other person. We thank Dan Bateyko, Kincaid MacDonald, Dominik Stammbach, and Inyoung Cheong for their thoughtful suggestions.</em></p><div><hr></div><ol><li><p>Restricted data is defined by the AEA to be "all data concerning (1) design, manufacture, or utilization of atomic weapons; (2) the production of special nuclear material; or (3) the use of special nuclear material in the production of energy" that has not been explicitly declassified under section 142 of the Act.</p></li><li><p>Much of the U.S. v. Progressive proceedings were classified and presented in camera. The case was later dropped by the government before more than a preliminary injunction was granted because other sources published very similar information before it could be classified.</p></li><li><p>There is a separate issue of liability implicit in this post. If a foundation model generates nuclear secrets, it likely does so because some person prompted it. If that person does so with the &#8220;intent to injure the United States&#8221; or to &#8220;secure an advantage to any foreign agent,&#8221; then the solicitor may also become liable under the AEA.
We presume that in cases where prompters have no &#8220;ill-intent&#8221; &#8212; like red teams &#8212; their efforts to solicit nuclear secrets in a safety check would not be subject to liability, though this is an open legal question.</p></li><li><p>The same might not be said of chemical and biological secrets. Current regulation of chemical and biological risks often focuses on regulating materials, not information. Laboratory chemicals are federally regulated by numerous groups, among them the Occupational Safety and Health Administration (OSHA), the Environmental Protection Agency (EPA), and the Drug Enforcement Administration (DEA). Biological materials are regulated by Institutional Biosafety Committees, which are in turn regulated by the National Science Advisory Board for Biosecurity, National Research Council, and the National Institutes of Health, among other groups. However, none of these agency authorities appear to preempt laws like SB-1047 because there doesn't seem to be any federal regulation on the dissemination of biologically risky information. For example, scientists famously published a 2018 paper where they recreated horsepox, an extinct-in-nature precursor to smallpox, despite the possible risk of a malicious actor recreating smallpox from it. Similarly, the infamous Anarchist Cookbook, which contains instructions on how to make various chemical weapons (some legit, most disproven), remains in circulation today, protected by the First Amendment. While many dual-use research proposals are carefully scrutinized by the National Institutes of Health (NIH), which undoubtedly limits the dissemination of chemical and biological information, chemical and biological risks are mostly mitigated by regulating the physical material needed to create chemical and biological weapons. That said, U.S. criminal law does constrain the distribution of dangerous instructions when paired with intent: 18 U.S.C.
&#167;&#8239;842(p) criminalizes teaching, demonstrating, or distributing information on explosives, destructive devices, or weapons of mass destruction with intent that it be used in a federal crime of violence, or knowing the recipient intends such use. But this statute has a savings clause (&#167;&#8239;848) that prevents preemption.</p></li></ol><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.trialserrors.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The AI Law &amp; Policy Update! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Anthropic Settles Its Copyright Litigation—and Why That Was the Right Move]]></title><description><![CDATA[As well as what it means for the broader landscape of litigation.]]></description><link>https://www.trialserrors.ai/p/anthropic-settles-its-respect-copyright</link><guid isPermaLink="false">https://www.trialserrors.ai/p/anthropic-settles-its-respect-copyright</guid><dc:creator><![CDATA[Peter Henderson]]></dc:creator><pubDate>Fri, 12 Sep 2025 14:00:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!SDz6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96bbfe4-2c5b-4471-97f2-6ce6f4b5beed_1408x736.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Anthropic <a
href="https://www.ft.com/content/96b59d8c-3625-4c2c-a6d6-435cff0392bf">settled</a> its class-action copyright litigation brought by a group of book authors. The terms were headline-grabbing: Anthropic agreed to pay <strong>$1.5 billion (!)</strong> to authors whose works were found to have been torrented, while committing to destroy the downloaded copies. Importantly, the deal left Anthropic&#8217;s existing models untouched&#8212;the company doesn&#8217;t have to retrain or delete them. In this post, I&#8217;ll explain why I think this was a good idea for Anthropic, despite the price tag, and what it might mean for the landscape of copyright+AI.</p><h2>Settling was the right move</h2><p>From a strategic perspective, Anthropic made the right call. It was barreling toward a trial over its torrenting of hundreds of thousands of books for training. Even with a Bay Area jury pool, I&#8217;m not sure it would have won that case. </p><p><em>The post-trial penalties could have been existential, and that posture could have chilled investment in the company. </em>Statutory damages can reach up to $150,000 per work (that comes out to ~$70 billion for the ~465,000 covered books), far exceeding the settlement amount. That said, had Anthropic lost at trial, Judge Alsup might have reduced the penalties to a not-so-existential number.
Here&#8217;s a table of some of Judge Alsup&#8217;s past judgments:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mU8f!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab3cba6a-1f0f-4b30-a1b5-5b9da5cb0d1d_866x226.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mU8f!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab3cba6a-1f0f-4b30-a1b5-5b9da5cb0d1d_866x226.png 424w, https://substackcdn.com/image/fetch/$s_!mU8f!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab3cba6a-1f0f-4b30-a1b5-5b9da5cb0d1d_866x226.png 848w, https://substackcdn.com/image/fetch/$s_!mU8f!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab3cba6a-1f0f-4b30-a1b5-5b9da5cb0d1d_866x226.png 1272w, https://substackcdn.com/image/fetch/$s_!mU8f!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab3cba6a-1f0f-4b30-a1b5-5b9da5cb0d1d_866x226.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mU8f!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab3cba6a-1f0f-4b30-a1b5-5b9da5cb0d1d_866x226.png" width="866" height="226" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ab3cba6a-1f0f-4b30-a1b5-5b9da5cb0d1d_866x226.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:226,&quot;width&quot;:866,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:56692,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.ailawpolicy.com/i/173212017?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab3cba6a-1f0f-4b30-a1b5-5b9da5cb0d1d_866x226.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mU8f!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab3cba6a-1f0f-4b30-a1b5-5b9da5cb0d1d_866x226.png 424w, https://substackcdn.com/image/fetch/$s_!mU8f!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab3cba6a-1f0f-4b30-a1b5-5b9da5cb0d1d_866x226.png 848w, https://substackcdn.com/image/fetch/$s_!mU8f!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab3cba6a-1f0f-4b30-a1b5-5b9da5cb0d1d_866x226.png 1272w, https://substackcdn.com/image/fetch/$s_!mU8f!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab3cba6a-1f0f-4b30-a1b5-5b9da5cb0d1d_866x226.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>They&#8217;re often a bit more than the settlement amount, but still within the same ballpark. 
Anthropic was also <a href="https://www.reuters.com/legal/litigation/judge-rejects-anthropic-bid-appeal-copyright-ruling-postpone-trial-2025-08-12/">blocked</a> from appealing until after the jury trial concluded, meaning that if it lost, it would have had to appeal with an existential verdict hanging over it. That could have also affected later investment rounds.</p><p><em>Settling this way leaves a path to cheaper data acquisition. </em>Crucially, it doesn&#8217;t require any changes to current models&#8212;and Anthropic can still use the books for training. It just has to buy a used copy and scan it.</p><p>Let me explain: Judge Alsup did <a href="https://www.courtlistener.com/docket/69058235/231/bartz-v-anthropic-pbc/">rule</a> in Anthropic&#8217;s favor on fair use for scanning books for training, while finding that torrenting was not fair use&#8212;albeit through somewhat roundabout reasoning. This means Anthropic was able to grow using torrented material, and now competitors are worse off because doing so is potentially riskier. At the same time, Anthropic has a ruling on the books that allows it to continue scanning used books for training. So Anthropic can pay the going rate for a single used book (likely less than $20 in bulk). As you can see, Anthropic&#8217;s decision to <a href="https://www.forbes.com/sites/douglaslaney/2025/06/29/how-claude-ai-clawed-through-millions-of-books/">hire</a> Google Books scanning guru Tom Turvey really paid off here. </p><p>(Aside: I&#8217;m not a fan of Judge Alsup&#8217;s split decision. To my mind it reflects a &#8220;good faith&#8221; factor that isn&#8217;t explicitly in the fair use statute. But my coauthors and I did correctly forecast that good faith would likely impact litigation&#8212;see <a href="https://www.jmlr.org/papers/v24/23-0569.html">here</a>.) </p><p>Caveat 1: Anthropic may still need to fight off direct infringement claims for outputs of their models, which are not covered by the settlement.
This means they need to continue to be vigilant on technical measures for preventing reproduced outputs or non-literal copying. We wrote about how difficult this can be <a href="https://proceedings.neurips.cc/paper_files/paper/2024/file/faed4276b52ef762879db4142655c699-Paper-Datasets_and_Benchmarks_Track.pdf">here</a>. </p><p>Caveat 2: Judge Alsup <a href="https://www.theverge.com/news/775230/anthropic-piracy-class-action-lawsuit-settlement-rejected">rejected</a> the settlement for now. His rejection flagged underspecified details: Who exactly would be covered? How would authors opt out? How would payments be distributed? So the deal is not yet finalized&#8212;what&#8217;s left to be done is hammering out the mechanics of class coverage, ensuring transparency in author compensation, and producing a settlement agreement that can withstand judicial scrutiny. As Judge Alsup stated, &#8220;We&#8217;ll see if I can hold my nose and approve it.&#8221; I expect that he will approve it&#8212;there are lots of incentives for the lawyers to smooth out the details: plaintiffs&#8217; attorneys are set to get 25% of the settlement!</p><h2>The Value of Books: $2,500 to authors?</h2><p>One interesting thing about the settlement is the price tag. The number everyone keeps seizing on in the settlement&#8212;roughly $3,000 per work&#8212;didn&#8217;t come out of nowhere. In late 2024, HarperCollins quietly <a href="https://authorsguild.org/news/harpercollins-ai-licensing-deal/">rolled out</a> an opt&#8209;in AI&#8209;training license that paid $5,000 per title for three years, split 50/50 between the author and the publisher&#8212;so $2,500 to the author, $2,500 to HarperCollins&#8212;for use in training, fine&#8209;tuning, and testing models. Looks similar to the current settlement numbers, after attorneys&#8217; fees (estimated to be ~25%).
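As a rough sanity check, here is the back-of-the-envelope arithmetic behind these figures. This is only a sketch using the post's approximate numbers (a $1.5B settlement, ~465,000 covered works, ~25% plaintiffs' attorneys' fees, a $150,000-per-work statutory ceiling); none of these are final or official figures.

```python
# Approximate figures from the post (assumptions, not final numbers).
settlement = 1_500_000_000   # settlement amount, dollars
works = 465_000              # approximate number of covered books
fee_rate = 0.25              # estimated plaintiffs' attorneys' fees

gross_per_work = settlement / works              # roughly $3,200 per work
net_to_author = gross_per_work * (1 - fee_rate)  # roughly $2,400 per work

# The statutory-damages ceiling Anthropic avoided by settling:
statutory_ceiling = 150_000 * works              # roughly $70 billion

print(round(gross_per_work), round(net_to_author), statutory_ceiling)
```

The ~$2,400 net figure lands near the HarperCollins $2,500-per-author benchmark, and the ~$70 billion ceiling shows why a $1.5 billion settlement was comparatively attractive.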
Like the settlement, the HarperCollins contract also did not grant derivative rights and authors did not disclaim output claims (i.e., if the model outputs the book verbatim, which could still trigger a lawsuit). </p><h2>Other litigation and weaker cases.</h2><p>I don&#8217;t think this settlement will necessarily cause the dominoes to fall in other cases. Some litigation, to my mind, has model creators in a stronger position&#8212;for example, mainly research-centric models (e.g., Nvidia, Databricks, Apple).</p><p>Going into this litigation, I thought Anthropic was in a fairly strong position. The main issue was the torrenting of books. The big question is whether the fair-use holding will extend to, say, scraping song lyrics from the web, as in Anthropic&#8217;s litigation with UMG and other music companies. In that case, UMG is suing over training Claude on lyrics. But there&#8217;s a key difference between the authors&#8217; case and UMG: authors were not able to get Claude to reproduce books, whereas UMG was able to get Claude to reproduce lyrics. In my opinion, that puts Anthropic in a worse position in the UMG litigation. That being said, the courts are a bit all over the place with AI litigation, so I think we&#8217;ll end up with some incongruous decisions&#8212;leaving it to the Supreme Court to create more uniformity, or perhaps more chaos, soon enough.</p><h2>The lawyers are the big winners.</h2><p>I&#8217;ll leave you with this. This litigation is extremely expensive. Plaintiffs&#8217; lawyers are about to receive 25-30% of $1.5B. So, the real winners are the lawyers. They will get a far bigger payday than the individual authors.
I asked Gemini&#8217;s nano-banana model to help me come up with a cartoon here that I thought was fitting:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SDz6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96bbfe4-2c5b-4471-97f2-6ce6f4b5beed_1408x736.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!SDz6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96bbfe4-2c5b-4471-97f2-6ce6f4b5beed_1408x736.png 424w, https://substackcdn.com/image/fetch/$s_!SDz6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96bbfe4-2c5b-4471-97f2-6ce6f4b5beed_1408x736.png 848w, https://substackcdn.com/image/fetch/$s_!SDz6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96bbfe4-2c5b-4471-97f2-6ce6f4b5beed_1408x736.png 1272w, https://substackcdn.com/image/fetch/$s_!SDz6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96bbfe4-2c5b-4471-97f2-6ce6f4b5beed_1408x736.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!SDz6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96bbfe4-2c5b-4471-97f2-6ce6f4b5beed_1408x736.png" width="1408" height="736" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a96bbfe4-2c5b-4471-97f2-6ce6f4b5beed_1408x736.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:736,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1512900,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.ailawpolicy.com/i/173212017?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96bbfe4-2c5b-4471-97f2-6ce6f4b5beed_1408x736.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!SDz6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96bbfe4-2c5b-4471-97f2-6ce6f4b5beed_1408x736.png 424w, https://substackcdn.com/image/fetch/$s_!SDz6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96bbfe4-2c5b-4471-97f2-6ce6f4b5beed_1408x736.png 848w, https://substackcdn.com/image/fetch/$s_!SDz6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96bbfe4-2c5b-4471-97f2-6ce6f4b5beed_1408x736.png 1272w, https://substackcdn.com/image/fetch/$s_!SDz6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96bbfe4-2c5b-4471-97f2-6ce6f4b5beed_1408x736.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This is just going to incentivize plaintiffs&#8217; side firms to keep bringing lawsuits. After all, why not? There is nothing, at this point, disincentivizing these lawsuits. Just the other day, Apple was <a href="https://lunch.publishersmarketplace.com/wp-content/uploads/2025/09/Hendrix-v-Apple-20250905.pdf">sued</a> for a research model (<a href="https://huggingface.co/apple/OpenELM">OpenElm</a>) that uses the RedPajama dataset, which has part of the books3 corpus in it. Every single model training run, released model, and description of the underlying data risks bringing litigation. This is also not a great status quo for transparency. 
Firms are greatly disincentivized from revealing anything about their training process and from releasing model weights right now.</p><p><strong>Who are we?</strong><em> <a href="https://www.peterhenderson.co/">Peter Henderson</a> is an Assistant Professor at Princeton University with appointments in the Department of Computer Science and the School of Public &amp; International Affairs, where he runs the Princeton <a href="https://www.polarislab.org/">Polaris Lab</a>. Previously, Peter received a JD-PhD from Stanford University. Every once in a while, we round up news and research at the intersection of AI and law. Also, just in case: none of this is legal advice. The views expressed here are purely our own and are not those of any entity, organization, government, or other person.</em></p>]]></content:encoded></item><item><title><![CDATA[New Research Alert: Statutory Construction and Interpretation for AI]]></title><description><![CDATA[tldr; Models interpret rules inconsistently, leading to stochastic outcomes. But we can leverage new computational tools to "debug" laws for AI&#8212;and make better law-following systems!]]></description><link>https://www.trialserrors.ai/p/new-research-alert-statutory-construction</link><guid isPermaLink="false">https://www.trialserrors.ai/p/new-research-alert-statutory-construction</guid><dc:creator><![CDATA[Peter Henderson]]></dc:creator><pubDate>Fri, 05 Sep 2025 14:49:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!aIjQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa44128d7-f9eb-4e3a-9b77-5c6fa0444094_1600x1597.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Different AI &#8220;constitutions&#8221; can read very differently &#8212; depending on who&#8217;s doing the reading.
Consider, for example, that Anthropic reported that Claude's Opus model<a href="https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686f4f3b2ff47.pdf"> might attempt</a> to contact authorities if it concludes a user's behavior was "egregiously immoral." So if the user was attempting to fake results from a clinical trial, Claude might try to silently write an email to the FDA.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aIjQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa44128d7-f9eb-4e3a-9b77-5c6fa0444094_1600x1597.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aIjQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa44128d7-f9eb-4e3a-9b77-5c6fa0444094_1600x1597.png 424w, https://substackcdn.com/image/fetch/$s_!aIjQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa44128d7-f9eb-4e3a-9b77-5c6fa0444094_1600x1597.png 848w, https://substackcdn.com/image/fetch/$s_!aIjQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa44128d7-f9eb-4e3a-9b77-5c6fa0444094_1600x1597.png 1272w, https://substackcdn.com/image/fetch/$s_!aIjQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa44128d7-f9eb-4e3a-9b77-5c6fa0444094_1600x1597.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aIjQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa44128d7-f9eb-4e3a-9b77-5c6fa0444094_1600x1597.png" width="1456" 
height="1453" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a44128d7-f9eb-4e3a-9b77-5c6fa0444094_1600x1597.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1453,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!aIjQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa44128d7-f9eb-4e3a-9b77-5c6fa0444094_1600x1597.png 424w, https://substackcdn.com/image/fetch/$s_!aIjQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa44128d7-f9eb-4e3a-9b77-5c6fa0444094_1600x1597.png 848w, https://substackcdn.com/image/fetch/$s_!aIjQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa44128d7-f9eb-4e3a-9b77-5c6fa0444094_1600x1597.png 1272w, https://substackcdn.com/image/fetch/$s_!aIjQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa44128d7-f9eb-4e3a-9b77-5c6fa0444094_1600x1597.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>On one hand, this behavior might seem mysterious; after all, where did the system get the idea that this was the right course of action? But recent work from researchers at Princeton&#8217;s Polaris Lab&#8212;titled <em>Statutory Construction and Interpretation for Artificial Intelligence</em>&#8212;provides a possible explanation: Anthropic's reported constitution includes a rule that asks an agent to pick responses that are "less risky for humanity in the long run." So, actively reporting on users' behavior could be seen as a logical way to comply with this rule.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.trialserrors.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The AI Law &amp; Policy Update! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>When Isaac Asimov introduced the "Three Laws of Robotics" in 1942, he imagined a world where intelligent agents could be governed by simple, rule-like constraints. Today, as AI capabilities accelerate, similar law-like constraints have resurfaced as a serious alignment strategy, such as<a href="https://arxiv.org/pdf/2212.08073"> Anthropic's "Constitutional AI" (CAI) framework</a> or<a href="https://model-spec.openai.com/2025-04-11.html"> OpenAI's Model Specs</a>. But, as Asimov's stories foretold, crafting and interpreting natural language laws is hard.</p><p>This, however, is not a new problem. The legal system has been grappling with the same challenges for hundreds of years. At the core of the challenge is <em>interpretive ambiguity</em>. CAI systems are guided by natural language principles that function like laws. Much like in legal systems, interpretive ambiguity arises both from how these principles are formulated and from how they are applied. While the legal system has evolved various tools (such as administrative rulemaking, stare decisis, and canons of construction) to manage this ambiguity, current AI alignment pipelines lack comparable safeguards. The result: different interpretations can lead to unstable or inconsistent model behavior, even when the rules themselves remain unchanged.</p><p>We argue that interpretive ambiguity is a fundamental yet underexplored problem in AI alignment. To build better law-following AI systems, and to construct better laws for AI to follow, we draw inspiration from the US legal system. 
We introduce an initial computational framework for constraining this ambiguity to produce more consistent alignment and law-following outcomes. In our work, we show how the legal system addresses this, how AI can benefit from similar structures, and how the computational tools we develop can help us understand the legal system better.</p><p><strong>Key Takeaways:</strong></p><ul><li><p><strong>Interpretive ambiguity is a hidden risk in AI alignment.</strong> Natural-language constitutions induce significant cross-model disagreement: 20 of the 56 rules lack consensus on &gt; 50% of tested scenarios.</p></li><li><p><strong>AI alignment frameworks lack safeguards against interpretive ambiguity.</strong> Unlike the legal setting, current AI alignment pipelines offer few safeguards against inconsistent applications of vaguely defined rules.</p></li><li><p><strong>Law-inspired computational tools can be leveraged for AI alignment.</strong> Computational analogs of administrative rule-making, iterative legislation, and interpretive constraints on judicial discretion can improve consistency across model judgments. We propose a method for modeling epistemic uncertainty over statutory ambiguity and leverage this metric to reduce the underlying ambiguity of rules.</p></li><li><p><strong>Our computational tools could also be useful for legal theory. 
</strong>They offer fresh methods for modeling statutory interpretation in the legal system and extending classic theories such as William Eskridge Jr.'s<a href="https://openyls.law.yale.edu/bitstream/20.500.13051/737/2/Dynamic_Statutory_Interpretation.pdf"> Dynamic Statutory Interpretation</a> or Ferejohn and Weingast's<a href="https://www.researchgate.net/profile/Barry-Weingast/publication/4774899_A_Positive_Theory_of_Statutory_Interpretation/links/5c62216492851c48a9cd4820/A-Positive-Theory-of-Statutory-Interpretation.pdf"> A Positive Theory of Statutory Interpretation</a>, which sought to formally model the dynamic external influences on how statutes are interpreted.<br><br><em>A longer version of this post is available <a href="https://www.polarislab.org/#/blog/statutory-construction-ai">here</a></em>, <em>an accompanying policy brief is <a href="https://www.polarislab.org/briefs/Statutory%20Construction%20and%20Interpretation%20for%20AI.pdf">here</a>, and the full paper can be found <a href="https://arxiv.org/abs/2509.01186">here.</a></em></p></li></ul><div><hr></div><p><strong>Blogpost authors:</strong> Nimra Nadeem, Lucy He, Michel Liao, and Peter Henderson</p><p><strong>Paper authors:</strong> Lucy He*, Nimra Nadeem*, Michel Liao, Howard Chen, Danqi Chen, Mariano-Florentino Cu&#233;llar, Peter Henderson</p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.trialserrors.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The AI Law &amp; Policy Update! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The "Bubble" of Risk for AI Agents]]></title><description><![CDATA[Improving Assessments for Offensive Cybersecurity Agents]]></description><link>https://www.trialserrors.ai/p/the-bubble-of-risk-for-ai-agents</link><guid isPermaLink="false">https://www.trialserrors.ai/p/the-bubble-of-risk-for-ai-agents</guid><dc:creator><![CDATA[Peter Henderson]]></dc:creator><pubDate>Fri, 25 Jul 2025 14:03:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CL2J!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49f81295-4b2b-4bb7-9299-b8ae7bb6df2b_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most frontier models today undergo some form of safety testing, including whether they can help adversaries launch costly cyberattacks. But many of these assessments overlook a critical factor: adversaries can adapt and modify models in ways that expand the risk far beyond the perceived safety profile that static evaluations capture. </p><p>In my group, Princeton's POLARIS Lab, we've previously studied how easily open-source or fine-tunable models can be manipulated to bypass safeguards. See, e.g., <a href="https://arxiv.org/abs/2412.07097">Wei et al. (2024)</a>, <a href="https://arxiv.org/abs/2310.03693">Qi et al. (2024)</a>, <a href="https://arxiv.org/abs/2406.05946">Qi et al. (2025)</a>, <a href="https://arxiv.org/abs/2404.01099">He et al. (2024)</a>. 
This flexibility means that model safety isn't fixed: there is a "bubble" of risk defined by the degrees of freedom an adversary has to improve an agent. If a model provider offers fine-tuning APIs or allows repeated queries, it dramatically increases the attack surface. This is especially true when evaluating AI systems for risks related to their use in offensive cybersecurity attacks. </p><p>In our recent research, <a href="https://arxiv.org/abs/2505.18384">Dynamic Risk Assessments for Offensive Cybersecurity Agents</a>, <strong>we show that the risk "bubble" is larger, cheaper, and more dynamic than many expect.</strong> For instance, using only 8 H100 GPU-hours of compute&#8212;about $36&#8212;an adversary could improve an agent's success rate on InterCode-CTF by over 40% using relatively simple methods.</p><h2>The Problem with Static Assessments</h2><p>Why are cybersecurity tasks particularly amenable to growing the bubble of risk? Because cybersecurity tasks often have built-in success signals. When a vulnerability is successfully exploited, the attacker gets clear feedback, enabling fast, iterative improvements to expand the risk bubble, including simple retries. Success is unambiguous&#8212;you either breached the system or you didn't. There are also strong financial incentives: ransomware attacks alone <a href="https://news.ku.edu/news/article/corporate-victims-of-ransomware-may-make-matters-worse-by-paying-attackers-study-finds">generate</a> over $1 billion annually, making it economically viable for adversaries to invest in compute resources. 
These factors create a perfect storm where adversaries might scale up performance quickly to deploy offensive cybersecurity agents.</p><h2>The Expanded Risk Bubble in Cybersecurity Tasks</h2><p><strong>Examples of expanding the risk bubble in agentic tasks.</strong> We identified five key strategies adversaries can use to improve model performance autonomously:</p><ol><li><p><strong>Repeated Sampling</strong>: In environments where actions don't leave a permanent trace, running the same task multiple times to brute-force a solution.</p></li><li><p><strong>Increasing Max Rounds of Interactions</strong>: Allowing the agent more steps within a task to explore.</p></li><li><p><strong>Iterative Prompt Refinement</strong>: Modifying prompts based on previous failures.</p></li><li><p><strong>Self-Training</strong>: Fine-tuning the agent's core model using only its own successful trajectories.</p></li><li><p><strong>Iterative Workflow Refinement</strong>: Modifying the agent's overall approach&#8212;how it plans, structures tasks, and uses tools&#8212;for meta-level improvements.</p></li></ol><p>Our study on the InterCode CTF (Test) dataset revealed that <em>even with a modest compute budget (8 H100 GPU Hours, costing less than $36), adversaries could boost an agent's cybersecurity capabilities by over 40% compared to the baseline.</em> We also found that iterative prompt refinement showed the highest risk potential in our evaluation, and that risk potential varies significantly between environments where previous actions persist (stateful) and those where they don't (non-stateful), underscoring the need for separate risk assessments.</p><h2>Compute as a Risk Quantification Mechanism</h2><p>The problem with dynamic risk assessments is that it is hard to put different techniques on the same scale. Compute provides an imperfect, but convenient, quantification mechanism for the bubble of risk. 
Our research emphasizes the need for dynamic, compute-aware evaluations that more accurately reflect real-world risk scenarios. The performance-cost curves we developed (see, for example, the figure below) help identify the most effective configurations for any given compute budget, providing a measurable way to assess the expanding risk bubble.</p><h2>Regulatory Implications</h2><p>This dynamic risk landscape has significant regulatory implications. Negligence liability in US tort law, for example, takes into account whether a particular risk is foreseeable. The bubble of risk along different dimensions shows how easy a potentially harmful modification is to make. This bubble of risk might be more representative of foreseeability analysis than a single pointwise value. See <a href="https://www.rand.org/content/dam/rand/pubs/research_reports/RRA3000/RRA3084-1/RAND_RRA3084-1.pdf">Ramakrishnan et al. (2024) for a discussion of tort law in the context of AI.</a></p><p>Laws like California's proposed <a href="https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047">SB-1047</a> (though vetoed) aimed to regulate fine-tuned models under certain compute thresholds as "covered model derivatives," placing them under the same regulatory oversight as their base models. Proposed legislation like this would double down on the importance of measuring, and understanding, the bubble of risk around a model. One challenge, though, is that the bubble can be relatively easy to expand, as we see in our work. If a model creator must account for large amounts of compute used by adversaries, there may not be a good way to prevent this level of modification.</p><h2>Conclusion</h2><p>With minimal resources, adversaries can significantly enhance the capabilities of offensive AI agents. Static assessments that ignore adversarial adaptability provide an incomplete picture of risk. 
To keep pace with real-world threats, safety evaluations must become dynamic, compute-aware, and continually updated. This is important not only for an accurate picture of risk, but may also be required for actual regulatory compliance.</p><p><em>For more details, see our full paper: <a href="https://arxiv.org/abs/2505.18384">Dynamic Risk Assessments for Offensive Cybersecurity Agents</a>. This blog is also cross-posted to <a href="https://pli.princeton.edu/blog/2025/%E2%80%9Cbubble%E2%80%9D-risk-improving-assessments-offensive-cybersecurity-agents">the Princeton Language+Intelligence Initiative Blog</a>. An accompanying policy brief is available <a href="https://www.polarislab.org/Dynamic Risk Assessment Policy Brief-2.pdf">here</a>. </em></p><p><em><a href="https://www.peterhenderson.co/">Peter Henderson</a> is an Assistant Professor at Princeton University with appointments in the Department of Computer Science and the School of Public &amp; International Affairs, where he runs the AI, Law, &amp; Society Lab. Previously, he received a JD-PhD from Stanford University.</em></p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.trialserrors.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The AI Law &amp; Policy Update! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Prompts ≠ Authorship?]]></title><description><![CDATA[Lots of AI copyright news, ChatGPT-Gov, DeepSeek, and more...]]></description><link>https://www.trialserrors.ai/p/prompts-authorship</link><guid isPermaLink="false">https://www.trialserrors.ai/p/prompts-authorship</guid><dc:creator><![CDATA[Peter Henderson]]></dc:creator><pubDate>Fri, 14 Feb 2025 16:13:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!_NAY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ead725d-2a99-407a-a6ef-3e28f9ba4c51_1408x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_NAY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ead725d-2a99-407a-a6ef-3e28f9ba4c51_1408x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_NAY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ead725d-2a99-407a-a6ef-3e28f9ba4c51_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!_NAY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ead725d-2a99-407a-a6ef-3e28f9ba4c51_1408x768.png 848w, 
https://substackcdn.com/image/fetch/$s_!_NAY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ead725d-2a99-407a-a6ef-3e28f9ba4c51_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!_NAY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ead725d-2a99-407a-a6ef-3e28f9ba4c51_1408x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_NAY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ead725d-2a99-407a-a6ef-3e28f9ba4c51_1408x768.png" width="1408" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6ead725d-2a99-407a-a6ef-3e28f9ba4c51_1408x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1977801,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_NAY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ead725d-2a99-407a-a6ef-3e28f9ba4c51_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!_NAY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ead725d-2a99-407a-a6ef-3e28f9ba4c51_1408x768.png 848w, 
https://substackcdn.com/image/fetch/$s_!_NAY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ead725d-2a99-407a-a6ef-3e28f9ba4c51_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!_NAY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ead725d-2a99-407a-a6ef-3e28f9ba4c51_1408x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p><strong>New in! 
</strong>When grant applicants promise AI will make their projects cheaper and better, how can federal agencies sort reality from hype? Dan provides recommendations in a <a href="https://fas.org/publication/blank-checks-for-black-boxes/">policy memo</a>, published with the Federation of American Scientists, for how funders can make better bets on AI.</p><div><hr></div><h1>AI Law &amp; Policy News</h1><p><strong>Good and Bad News for Prompt Artists</strong>: In a <a href="https://copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf">new report</a>, the U.S. Copyright Office reaffirmed its line against extending copyright protection to AI-generated outputs. Even if the artist puts a lot of creative effort into a prompt, the thinking goes, that prompt doesn&#8217;t make them the author of the output. The prompts themselves, though, could be copyrightable (if they meet the bar for originality), as would be the use of AI outputs in human-made work.</p><blockquote><p><strong>Quick take</strong>: That family photo you touched up with AI? That meme generator you used to see yourself as an 80-year-old? That's where things get uncertain. 
The Office left open the question of when &#8220;assistive uses&#8221; of AI might be copyrightable. Digging into the footnotes, the report cites approvingly to one public comment. </p><p>To see this principle in action, you can <a href="https://www.cnet.com/tech/services-and-software/this-company-got-a-copyright-for-an-image-made-entirely-with-ai-heres-how/">look</a> to recent discussion of an image that <em>was </em>granted copyright. It&#8217;s titled &#8220;A Single Piece of American Cheese: An origin story&#8221; (seen below).</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pxXo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ebb4d6a-f488-4704-9516-1142e47fe621_1200x675.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pxXo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ebb4d6a-f488-4704-9516-1142e47fe621_1200x675.jpeg 424w, https://substackcdn.com/image/fetch/$s_!pxXo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ebb4d6a-f488-4704-9516-1142e47fe621_1200x675.jpeg 848w, https://substackcdn.com/image/fetch/$s_!pxXo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ebb4d6a-f488-4704-9516-1142e47fe621_1200x675.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!pxXo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ebb4d6a-f488-4704-9516-1142e47fe621_1200x675.jpeg 1456w" sizes="100vw"><img
src="https://substackcdn.com/image/fetch/$s_!pxXo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ebb4d6a-f488-4704-9516-1142e47fe621_1200x675.jpeg" width="1200" height="675" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1ebb4d6a-f488-4704-9516-1142e47fe621_1200x675.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:675,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;two AI images side by side to show the editing process&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="two AI images side by side to show the editing process" title="two AI images side by side to show the editing process" srcset="https://substackcdn.com/image/fetch/$s_!pxXo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ebb4d6a-f488-4704-9516-1142e47fe621_1200x675.jpeg 424w, https://substackcdn.com/image/fetch/$s_!pxXo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ebb4d6a-f488-4704-9516-1142e47fe621_1200x675.jpeg 848w, https://substackcdn.com/image/fetch/$s_!pxXo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ebb4d6a-f488-4704-9516-1142e47fe621_1200x675.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!pxXo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ebb4d6a-f488-4704-9516-1142e47fe621_1200x675.jpeg 1456w" 
sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Credit: <a href="https://www.cnet.com/tech/services-and-software/this-company-got-a-copyright-for-an-image-made-entirely-with-ai-heres-how/">Invoke AI</a></figcaption></figure></div><p>The authors took the AI generated image on the left and then used an AI-assisted inpainting tool via many iterations to create the final image. This final image, the Office suggested, is copyrightable as a compilation. However, the copyright is likely thin: the image on the left is still not copyrightable. 
This is similar to past work, like the comic book <a href="https://www.copyright.gov/docs/zarya-of-the-dawn.pdf">Zarya of the Dawn</a> which received a similar (thin) compilation right.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1QCx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facfd44c8-de2c-4b41-aa0f-b6944dab1d95_376x495.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1QCx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facfd44c8-de2c-4b41-aa0f-b6944dab1d95_376x495.png 424w, https://substackcdn.com/image/fetch/$s_!1QCx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facfd44c8-de2c-4b41-aa0f-b6944dab1d95_376x495.png 848w, https://substackcdn.com/image/fetch/$s_!1QCx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facfd44c8-de2c-4b41-aa0f-b6944dab1d95_376x495.png 1272w, https://substackcdn.com/image/fetch/$s_!1QCx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facfd44c8-de2c-4b41-aa0f-b6944dab1d95_376x495.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1QCx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facfd44c8-de2c-4b41-aa0f-b6944dab1d95_376x495.png" width="376" height="495" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/acfd44c8-de2c-4b41-aa0f-b6944dab1d95_376x495.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:495,&quot;width&quot;:376,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:331746,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1QCx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facfd44c8-de2c-4b41-aa0f-b6944dab1d95_376x495.png 424w, https://substackcdn.com/image/fetch/$s_!1QCx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facfd44c8-de2c-4b41-aa0f-b6944dab1d95_376x495.png 848w, https://substackcdn.com/image/fetch/$s_!1QCx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facfd44c8-de2c-4b41-aa0f-b6944dab1d95_376x495.png 1272w, https://substackcdn.com/image/fetch/$s_!1QCx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facfd44c8-de2c-4b41-aa0f-b6944dab1d95_376x495.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This, of course, contradicts the Vatican&#8217;s position, which <a href="https://ipkitten.blogspot.com/2025/01/new-vatican-ai-guidelines-for.html">argued</a> that all AI-generated content created within Vatican City&#8217;s borders is owned by the Vatican.</p></blockquote><p><strong>The Imitation Game. </strong><a href="https://www.nbcnews.com/tech/tech-news/openai-says-deepseek-may-inapproriately-used-data-rcna189872">OpenAI claims</a> rival DeepSeek may have "inappropriately" trained on ChatGPT outputs in violation of OpenAI&#8217;s Terms of Service. </p><blockquote><p><strong>Quick Take</strong>: OpenAI might want to pick its battles. Enforcing terms of service against AI model training could be a costly exercise in futility. As we wrote about Peter&#8217;s work in our last newsletter, &#8220;Companies have little room to make copyright infringement claims over genAI outputs&#8212;and even the models themselves. 
Enforcing contract claims, too, is challenging.&#8221; Importantly, model providers will need to rely on fair use and the lack of enforcement of anti-scraping terms, as their own use of data is scrutinized.</p></blockquote><p><strong>Is torrenting training data fair use?</strong> Recent discovery in the litigation against Meta reveals <a href="https://x.com/PeterHndrsn/status/1879598685098783183">ablation studies</a> showing that LibGen was a useful dataset for meeting SOTA on benchmarks like MMLU. The problem is that this data&#8212;likely used by most model creators&#8212;comes from a BitTorrent tracker. Meta is not unique here. A DeepSeek paper also states that they <a href="https://arxiv.org/abs/2403.05525">used</a> Anna&#8217;s Archive. </p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!iG01!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f5e8434-673f-4967-8fea-38b56ec2a3d3_802x97.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iG01!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f5e8434-673f-4967-8fea-38b56ec2a3d3_802x97.png 424w, https://substackcdn.com/image/fetch/$s_!iG01!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f5e8434-673f-4967-8fea-38b56ec2a3d3_802x97.png 848w, https://substackcdn.com/image/fetch/$s_!iG01!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f5e8434-673f-4967-8fea-38b56ec2a3d3_802x97.png 1272w, 
https://substackcdn.com/image/fetch/$s_!iG01!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f5e8434-673f-4967-8fea-38b56ec2a3d3_802x97.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iG01!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f5e8434-673f-4967-8fea-38b56ec2a3d3_802x97.png" width="802" height="97" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0f5e8434-673f-4967-8fea-38b56ec2a3d3_802x97.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:97,&quot;width&quot;:802,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:40872,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!iG01!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f5e8434-673f-4967-8fea-38b56ec2a3d3_802x97.png 424w, https://substackcdn.com/image/fetch/$s_!iG01!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f5e8434-673f-4967-8fea-38b56ec2a3d3_802x97.png 848w, https://substackcdn.com/image/fetch/$s_!iG01!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f5e8434-673f-4967-8fea-38b56ec2a3d3_802x97.png 1272w, 
https://substackcdn.com/image/fetch/$s_!iG01!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f5e8434-673f-4967-8fea-38b56ec2a3d3_802x97.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>On the other hand, some AI companies are going after licensing <a href="https://x.com/ashleevance/status/1882100362003537929">deals</a> with content creators. Authors publishing with HarperCollins were offered $2500 per title to allow their books to be ingested by AI.</p><p><strong>The Year of AI for Gov. </strong>OpenAI launched <a href="https://openai.com/global-affairs/introducing-chatgpt-gov/">ChatGPT Gov</a>, a self-hosted copy of its ChatGPT enterprise tool to help meet federal security, privacy, and compliance requirements. Already, government offices like the Air Force Research Lab are using ChatGPT enterprise for administrative tasks. </p><blockquote><p><strong>Quick Take</strong>: We already expected federal agencies to pilot and test using LLMs for administrative tasks, but the current administration is moving faster than expectations. In a <a href="https://www.404media.co/things-are-going-to-get-intense-how-a-musk-ally-plans-to-push-ai-on-the-government/">staff meeting</a>, the head of the General Services Administration&#8217;s Technology Transformation Services plans to use AI widely, including bringing AI coding agents to re-write government software. This comes at a time when every major AI player is angling for public sector contracts.  </p></blockquote><p><strong>First Amendment arguments get real for AI. </strong>Character AI is <a href="https://x.com/PeterHndrsn/status/1883974213318713346">arguing</a> that its models are covered by the First Amendment, barring tort claims against them. This will bring largely academic debates about First Amendment protections for AI to a very real setting. 
We expect SCOTUS to take a case like this in the coming years, perhaps even this one.</p><p><strong>When AI Must Say "I'm Not Real".</strong> <a href="https://www.theverge.com/news/605728/california-chatbot-bill-child-safety">California proposed</a> a new bill (<a href="https://legiscan.com/CA/text/SB243/2025">SB243</a>), which would require, among other things, that AI companies give minors conspicuous and repeated notice that the chatbot&#8217;s responses are artificially generated. The bill also requires firms to report to the state the number of times young users express &#8220;suicidal ideation.&#8221;</p><p><strong>Contract Review. </strong>Adobe added to its paid Acrobat AI assistant <a href="https://www.adobe.com/acrobat/generative-ai-pdf.html">a feature</a> to review contracts and provide an overview, highlighting differences across agreements. Who knew Adobe would start getting into legal tech!</p><h1>Academic Corner: What we&#8217;re reading</h1><ul><li><p><a href="https://arxiv.org/abs/2502.01926">Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs</a>, Angelina Wang, Michelle Phan, Daniel E. Ho, Sanmi Koyejo</p></li><li><p><a href="https://www.gov.uk/government/publications/international-ai-safety-report-2025">International AI Safety Report</a>, AI Safety Institute</p></li><li><p><a href="https://www.microsoft.com/en-us/research/publication/position-evaluating-generative-ai-systems-is-a-social-science-measurement-challenge/">Position: Evaluating Generative AI Systems is a Social Science Measurement Challenge</a>, Hanna Wallach, Meera Desai, A. Feder Cooper, Angelina Wang, Chad Atalla, Solon Barocas, Su Lin Blodgett, Alexandra Chouldechova, Emily Corvi, P. Alex Dow, Jean Garcia-Gathright, Alexandra Olteanu, Nicholas Pangakis, Stefanie Reed, Emily Sheng, Dan Vann, Jennifer Wortman Vaughan, Matthew Vogel, Hannah Washington, Abigail Z. 
Jacobs</p></li><li><p><a href="https://www.arxiv.org/abs/2502.00003">Defending Compute Thresholds Against Legal Loopholes</a>, Matteo Pistillo, Pablo Villalobos</p><p></p></li></ul><p><strong>Who are we?</strong><em> <a href="https://www.peterhenderson.co/">Peter Henderson</a> is an Assistant Professor at Princeton University with appointments in the Department of Computer Science and the School of Public &amp; International Affairs, where he runs the AI, Law, &amp; Society Lab. Previously Peter received a JD-PhD from Stanford University. <a href="https://www.dbateyko.info/">Dan Bateyko</a> researches artificial intelligence and law at Cornell University in the Department of Information Science. Every once in a while, we round up news at the intersection of Law, Policy, and AI. Also&#8230; just in case, none of this is legal advice, and any views we express here are purely our own and are not those of any entity, organization, government, or other person.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.trialserrors.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The AI Law &amp; Policy Update! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI Terms of Service: Tricks of The Light?]]></title><description><![CDATA[Plus AI in the patent office, open source AI supply chain attacks, and ChatGPT in the courts]]></description><link>https://www.trialserrors.ai/p/ai-terms-of-service-tricks-of-the</link><guid isPermaLink="false">https://www.trialserrors.ai/p/ai-terms-of-service-tricks-of-the</guid><dc:creator><![CDATA[Peter Henderson]]></dc:creator><pubDate>Fri, 13 Dec 2024 14:03:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qFVt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e67fd41-b468-48e7-8be4-eeeecb35d58e_1022x520.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qFVt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e67fd41-b468-48e7-8be4-eeeecb35d58e_1022x520.gif" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qFVt!,w_424,c_limit,f_webp,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e67fd41-b468-48e7-8be4-eeeecb35d58e_1022x520.gif 424w, 
https://substackcdn.com/image/fetch/$s_!qFVt!,w_848,c_limit,f_webp,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e67fd41-b468-48e7-8be4-eeeecb35d58e_1022x520.gif 848w, https://substackcdn.com/image/fetch/$s_!qFVt!,w_1272,c_limit,f_webp,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e67fd41-b468-48e7-8be4-eeeecb35d58e_1022x520.gif 1272w, https://substackcdn.com/image/fetch/$s_!qFVt!,w_1456,c_limit,f_webp,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e67fd41-b468-48e7-8be4-eeeecb35d58e_1022x520.gif 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qFVt!,w_1456,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e67fd41-b468-48e7-8be4-eeeecb35d58e_1022x520.gif" width="1022" height="520" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0e67fd41-b468-48e7-8be4-eeeecb35d58e_1022x520.gif&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:520,&quot;width&quot;:1022,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6378362,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/gif&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qFVt!,w_424,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e67fd41-b468-48e7-8be4-eeeecb35d58e_1022x520.gif 424w, https://substackcdn.com/image/fetch/$s_!qFVt!,w_848,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e67fd41-b468-48e7-8be4-eeeecb35d58e_1022x520.gif 848w, 
https://substackcdn.com/image/fetch/$s_!qFVt!,w_1272,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e67fd41-b468-48e7-8be4-eeeecb35d58e_1022x520.gif 1272w, https://substackcdn.com/image/fetch/$s_!qFVt!,w_1456,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e67fd41-b468-48e7-8be4-eeeecb35d58e_1022x520.gif 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p><strong>New in!</strong> Peter and co-author Mark Lemley have a new piece, <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5049562">The 
Mirage of Artificial Intelligence Terms of Use Restrictions</a>, about how enforcing Terms of Use on model outputs and weights is an uphill battle for AI companies. As Peter and Mark argue, &#8220;AI terms of service are built on a house of sand.&#8221; Companies have little room to make copyright infringement claims over genAI outputs&#8212;and even the models themselves. Enforcing contract claims, too, is challenging. Yet from a policy perspective, we may want some narrowly worded policies to have teeth, particularly those around responsible use. So what then? Peter and Mark argue that legislation, rather than Terms of Service, is the better avenue to achieve policy goals, not least for reasons of political process and public oversight.</p><div><hr></div><h2>AI Law &amp; Policy News</h2><ul><li><p><strong>Copyright litigation progress: </strong>OpenAI lost a motion to dismiss a DMCA &#167; 1202(b)(1) claim brought by Intercept Media, though it successfully dismissed the DMCA &#167; 1202(b)(3) claim. This follows its win against Raw Story Media on similar DMCA &#167; 1202(b) claims, which we previously <a href="https://substack.com/home/post/p-151438207?source=queue">covered</a>. </p><blockquote><p><strong>Quick Take: </strong>Peter provided some input on a <a href="https://news.bloomberglaw.com/ip-law/ai-training-digital-copyright-ruling-paves-way-for-more-lawsuits">Bloomberg Law</a> piece on why this could be one to watch. Plaintiffs dug deep on the research behind GPT-3 to find the tool (Dragnet) that was used to download web data. The tool itself leaves out CMI during the download, which was plausibly enough to get to discovery on a DMCA &#167; 1202(b)(1) claim. Plaintiffs will still need to get through a double scienter requirement to win, which seems unlikely. However, this may be a formula for getting to discovery on these claims going forward. OpenAI is now battling discovery in other cases, like the NYTimes case. 
There, the NYTimes <a href="https://www.wired.com/story/new-york-times-openai-erased-potential-lawsuit-evidence/">accused</a> OpenAI of deleting its findings from a secure computer during the discovery process. At the same time, OpenAI shot back, <a href="https://news.bloomberglaw.com/ip-law/openai-pushes-back-on-erroneous-discovery-order-in-nyt-lawsuit">requesting</a> information on the NYTimes&#8217;s own use of data for training AI, framing the NYTimes as hypocritical in its litigation. A judge denied this discovery request.</p></blockquote></li><li><p><strong>Character AI sued (again):</strong> Character AI <a href="https://www.washingtonpost.com/documents/028582a9-7e6d-4e60-8692-a061f4f4e745.pdf">has been sued</a> again (along with Google and C.AI&#8217;s founder). This time, parents sue C.AI for offering an unsafe product to minors. In one case, parents argue that the bot told their child that &#8220;killing his parents might be a reasonable response [to them limiting his screentime].&#8221; Another parent stated that the C.AI bot engaged in hypersexualized dialogs with a minor. Another said the bot provided their child with detailed descriptions of how to commit self-harm. You&#8217;ll recall that previously, Character AI was <a href="https://www.courtlistener.com/docket/69300919/garcia-v-character-technologies-inc/">sued</a> for wrongful death. </p></li></ul><blockquote><p><strong>Quick Take: </strong>This is really important litigation to follow. Whether it succeeds or not, protecting minors from technology has increasing bipartisan support and may result in a bill targeting these risks. In technical research, these sorts of problems likely need far more attention than they receive&#8212;compared to other types of safety risks. 
We&#8217;ve regularly pointed out how customized models lose their guardrails easily, and <a href="https://hai.stanford.edu/sites/default/files/2024-01/Policy-Brief-Safety-Risks-Customizing-Foundation-Models-Fine-Tuning.pdf">our policy brief </a>even discussed a hypothetical scenario where a customized K-12 education chatbot loses its safety guardrails after customization. Character AI&#8217;s system allows for the creation of customized chatbots potentially implicating this exact risk. And in general, protecting vulnerable users requires care and continued investments in alignment. This is especially challenging for long dialogs, where even models with significant investment in alignment falter. Google&#8217;s Gemini, for example, recently <a href="https://www.cbsnews.com/news/google-ai-chatbot-threatening-message-human-please-die/">told</a> a user to &#8220;please die&#8221; after a long dialog where a student was using Gemini to help with their homework.</p></blockquote><ul><li><p><strong>Reliance on Support Chatbots</strong>: A federal court <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.424673/gov.uscourts.cand.424673.49.0.pdf">dismissed</a> a promissory estoppel claim against Substack where a user sued after the company's support chatbot promised to respond to all complaints but Substack never did. The court found the chatbot's responses weren&#8217;t specific enough about how or when Substack would respond, and the plaintiff couldn't show substantial detrimental reliance on the chatbot's assurances. </p></li></ul><blockquote><p><strong>Quick Take: </strong>While these plaintiffs lost, in the future there might be a claim where there was a strong fact pattern showing detrimental reliance in other contexts. For example, Air Canada was <a href="https://www.cbsnews.com/news/aircanada-chatbot-discount-customer/">on the hook</a> for a discount promised by its chatbot. </p></blockquote><ul><li><p><strong>GenAI in the Patent Office. 
</strong>The U.S. Patent and Trademark Office (USPTO) banned the use of external GenAI tools last year, according to a FOIA&#8217;d memo <a href="https://www.wired.com/story/us-patent-trademark-office-internally-banned-generative-ai/">obtained by Wired</a>. </p></li></ul><blockquote><p><strong>Quick Take: </strong>Banning external tools is the right move, but there&#8217;s an opportunity to scale up the use of internal AI. AI search is not new to USPTO. The office&#8217;s Patent End-to-End search tool includes AI functionality and, as Wired reports, the Patent Office just inked a deal with Accenture to build more tools for examiners to search databases faster and more accurately. But text search is not the only use case. Drafting office actions, breaking down diagrams and images, and basic formatting checks are potential uses as well. A human-centered design approach&#8212;where the agency surveys and collaborates with patent examiners to identify cumbersome processes&#8212;could be particularly effective. </p></blockquote><ul><li><p><strong>Other AI use cases in Courts and Governments. </strong>Buenos Aires courts have begun using ChatGPT to draft legal rulings, reports <a href="https://restofworld.org/2024/buenos-aires-courts-adopt-chatgpt-draft-rulings/">Rest of World</a>. Following through on the 2022 CHIPS Act, the NSF&#8217;s National Secure Data Service Demonstration <a href="https://fedscoop.com/ai-chatbot-part-of-federal-data-access-service/">proposed an AI chatbot</a> to answer questions about public agency data. According to the <a href="https://www.americasdatahub.org/data-access-alternatives-artificial-intelligence-supported-interfaces-daa/">contract award</a>, the chatbot aims to improve on having citizens search Google or email federal staff for answers.</p></li><li><p><strong>The Data Labeling Market Grows. 
</strong>Uber&#8217;s in the Data Labeling business now: <a href="https://www.bloomberg.com/news/articles/2024-11-26/uber-expands-into-ai-data-labeling-using-gig-coders-for-hire">Bloomberg reports</a> that the ridesharing app has expanded into data-labeling offerings for companies training models. Uber&#8217;s <a href="https://www.uber.com/us/en/scaled-solutions/annotation/?uclick_id=bfd122ba-d225-4dff-8e21-4fa5de340518#supported-annotations">supported annotations</a> cover a wide variety of labeling tasks across text, audio, video, and maps.</p></li><li><p><strong>Security for Open Source Models.</strong> <a href="https://www.bleepingcomputer.com/news/security/ultralytics-ai-model-hijacked-to-infect-thousands-with-cryptominer/">BleepingComputer reported</a> on a supply chain attack on an open source computer vision and AI tool, Ultralytics&#8217; YOLO11, which caused users to run cryptomining software. The attacker took advantage of an automated build and release workflow&#8212;a cautionary tale of the many ways malicious code can be introduced. The problem has since been fixed.</p></li><li><p><strong>Legislation and Policymaking. </strong>The Department of Homeland Security (DHS) <a href="https://www.darkreading.com/cloud-security/dhs-releases-secure-ai-framework-critical-infrastructure">released its framework</a> for the use of AI in critical infrastructure. Congressional representatives Ted Lieu and Kevin Kiley <a href="https://lieu.house.gov/media-center/press-releases/reps-lieu-and-kiley-introduce-bill-increase-penalties-fraud-using-ai">introduced legislation</a> to increase penalties for committing financial fraud using AI.</p></li></ul><p></p><p><strong>Who are we?</strong><em> <a href="https://www.peterhenderson.co/">Peter Henderson</a> is an Assistant Professor at Princeton University with appointments in the Department of Computer Science and the School of Public &amp; International Affairs, where he runs the AI, Law, &amp; Society Lab. 
Previously Peter received a JD-PhD from Stanford University. <a href="https://www.dbateyko.info/">Dan Bateyko</a> researches artificial intelligence and law at Cornell University in the Department of Information Science. Every once in a while, we round up news at the intersection of Law, Policy, and AI. Also&#8230; just in case, none of this is legal advice, and any views we express here are purely our own and are not those of any entity, organization, government, or other person.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.trialserrors.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The AI Law &amp; Policy Update! Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI Law & Policy Update: AI policy after the U.S. 
election]]></title><description><![CDATA[Also, OpenAI wins a preliminary motion to dismiss in one copyright lawsuit, major AI companies ramp up sales to the military, and the public sector uses of AI expand.]]></description><link>https://www.trialserrors.ai/p/ai-law-and-policy-update-ai-policy</link><guid isPermaLink="false">https://www.trialserrors.ai/p/ai-law-and-policy-update-ai-policy</guid><dc:creator><![CDATA[Peter Henderson]]></dc:creator><pubDate>Wed, 13 Nov 2024 16:31:13 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa045dcd7-39e0-4264-836f-52272bc0971e_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>AI Policy After the U.S. Election</h2><p>There&#8217;s <a href="https://time.com/7174210/what-donald-trump-win-means-for-ai/">no</a> <a href="https://techcrunch.com/2024/11/06/what-trumps-victory-could-mean-for-ai-regulation/">shortage</a> <a href="https://fortune.com/2024/11/08/trump-ai-policy-elon-musk-tariffs-china/">of</a> <a href="https://www.vox.com/future-perfect/383532/election-2024-donald-trump-elon-musk-tech-industry-artificial-intelligence">commentary</a> <a href="https://www.nature.com/articles/d41586-024-03667-w">on</a> <a href="https://www.rstreet.org/commentary/ai-policy-in-the-trump-administration-and-congress-after-the-2024-elections/">what</a> <a href="https://www.fastcompany.com/91223897/how-a-new-trump-administration-will-treat-the-budding-ai-industry">Trump</a> 2.0 means for AI. One of the clearest signals comes from the <a href="https://www.documentcloud.org/documents/24795758-read-the-2024-republican-party-platform">official Trump platform</a>, which pledges to revoke Biden&#8217;s Executive Order on AI, a centerpiece of his AI policy. But while Trump may want to revoke the order, it&#8217;s not fully clear what takes its place. After all, the E.O. 
is <a href="https://www.theatlantic.com/technology/archive/2024/10/trump-ai-policy/680476/">in step</a> with Trump&#8217;s first-term policies making AI a national priority: the Trump Administration set up <a href="https://www.semafor.com/article/11/08/2024/how-technology-could-dominate-the-trump-presidency">AI research institutes</a>, <a href="https://venturebeat.com/business/white-house-announces-creation-of-ai-and-quantum-research-institutes/">proposed spending on AI-related grants</a>, and issued an <a href="https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/">Executive Order</a> calling on agencies to invest in AI R&amp;D and training. As <a href="https://www.politico.com/live-updates/2024/11/07/congress/kratsios-slater-to-handle-tech-for-transition-00188237">Politico</a> reported last week, the Trump transition team tapped one of the authors of Trump&#8217;s E.O. to handle technology policy. The difference between the Trump campaign&#8217;s rhetoric and actions clouds the forecast of what the next four years look like: will the next term be a <a href="https://www.theatlantic.com/technology/archive/2024/10/trump-ai-policy/680476/">return to form</a> or a departure?</p><p>What we expect is that agencies will keep exploring AI as a tool for improving government efficiency and for gaining the upper hand in geopolitical competition. We also expect some agencies, like the FTC, to slow down on regulation. But states and federalism may step in to fill the gap. 
This year alone, <a href="https://www.ncsl.org/technology-and-communication/artificial-intelligence-2024-legislation?__cf_chl_tk=MbOQibtbxw67FZSC8V1mezO.eojvI_fOQme8IVy__GE-1731343520-1.0.1.1-JgHtyeMTOYCylrprf2qW7fzH6GmJSDbTfOfjXO2brbM">45 states introduced AI-related bills</a>, and major proposals in states like <a href="https://digitaldemocracy.calmatters.org/bills/ca_202320240sb1047">California</a> have kicked off heated debates. These proposals at the state level are coming from all sides of the political spectrum, so it seems likely that&#8212;in one form or another&#8212;they will keep coming.</p><div><hr></div><h2><br>Commentary Corner: AI Law &amp; Policy News</h2><ul><li><p><strong>Copyright. </strong><a href="https://www.theverge.com/2024/10/18/24273895/penguin-random-house-books-copyright-ai">Penguin Random House added a line</a> to the front matter of its published books, warning AI companies not to train on them. Whether warnings like these, including <a href="https://www.eff.org/deeplinks/2023/12/no-robotstxt-how-ask-chatgpt-and-google-bard-not-use-your-website-training">robots.txt</a> files and canary strings, work to deter large language model providers from using content remains uncertain. Meanwhile, X <a href="https://www.engadget.com/social-media/x-updates-its-privacy-policy-to-allow-third-parties-to-train-ai-models-with-its-data-234207599.html?src=rss&amp;guccounter=1&amp;guce_referrer=aHR0cHM6Ly93d3cuaW5vcmVhZGVyLmNvbS8&amp;guce_referrer_sig=AQAAAL1DjqGbZKwq0hCG27B_sU11sCzU1C6VNNnVTz9fOV7dYiseqBkply2d3NkQQe2oa0IsYNwkGrIUZKmeIrq2Sn_NFhwdnUR2ohTvC8gXmCJbvXUOFts7IKbDa-3L8t-6ZeECGKfGZOeZ4xh4rgDVxqtX6qmlnNfR4ko2mqqMXaQ2">modified</a> its privacy policy to allow it to provide user data for training AI models. 
The AI startup Perplexity faces a <a href="https://www.wired.com/story/dow-jones-new-york-post-sue-perplexity/">copyright suit </a>for using news content in its AI search engine responses (and attributing some &#8220;hallucinated&#8221; outputs to news providers).&nbsp;And OpenAI <a href="https://storage.courtlistener.com/recap/gov.uscourts.nysd.616533/gov.uscourts.nysd.616533.117.0.pdf">won</a> a motion to dismiss in Raw Story Media&#8217;s copyright lawsuit against them.</p></li></ul><blockquote><h4><em>Quick Take</em></h4><p><em>OpenAI&#8217;s win against Raw Story Media is on one hand not surprising: the only claim in this case was a DMCA &#167; 1202(b) claim. These claims have been dismissed in most AI copyright cases so far. The standard that has developed is one that <a href="https://casetext.com/case/doe-v-github-inc-1">requires exact reproductions of the material</a>, something very unlikely to happen at scale in the AI setting (or at least that has not been shown in any of the dismissed complaints). What makes the Raw Story Media case unique is that the judge dismissed the case on constitutional standing grounds (not the 1202(b) identicality requirement). The court argued that Raw Story Media didn&#8217;t allege a concrete harm, something that some legal scholars <a href="https://www.wired.com/story/opena-alternet-raw-story-copyright-lawsuit-dmca-standing/">have</a> <a href="https://copyrightlately.com/raw-story-copyright-lawsuit-standing/">noted</a> could have far-reaching implications for other copyright cases. There is, however, reason for caution in over-extrapolating. The court&#8217;s standing analysis was wrapped up in the specifics of DMCA &#167; 1202(b) and may not generalize to other cases where actual infringement is alleged. </em></p></blockquote><ul><li><p><strong>State-level Efforts</strong>. 
California&#8217;s State Civil Rights Department <a href="https://news.bloomberglaw.com/product/blaw/bloomberglawnews/exp/eyJpZCI6IjAwMDAwMTkyLWIwOTctZGZmYy1hYmRlLWZiYjc3NDhmMDAwMSIsImN0eHQiOiJBSU5XIiwidXVpZCI6Ik9EdkkxNVgrNlYwcTM1SmIxRzJBSlE9PXpZcnpGVkRWYlVpeWxoVWw4RWtpeGc9PSIsInRpbWUiOiIxNzI5NjgyMzE1MTA0Iiwic2lnIjoiNDR1aDU1akNuamp2RVJuQ2k0eVMvaW9CdVFRPSIsInYiOiIxIn0=">released new proposed rules</a> on when AI vendors can be liable for their hiring tools.&nbsp; Public comments on the draft are open until November 18.  In Texas, lawmakers proposed the &#8220;<a href="https://www.documentcloud.org/documents/25257148-texas-responsible-ai-governance-act-traiga-1">Texas Responsible AI Governance Act</a>,&#8221; which sets rules for AI developers and distributors, including requiring developers to submit reports on the limitations of an AI system and deployers to conduct impact assessments.</p></li><li><p><strong>Microsoft Bing Chat Defamation Case Goes to Arbitration. </strong>In the District Court of Maryland, a <a href="https://storage.courtlistener.com/recap/gov.uscourts.mdd.540279/gov.uscourts.mdd.540279.48.0.pdf">judge granted Microsoft a stay in a case</a> over whether its Bing AI-generated responses defamed and harmed a plaintiff.&nbsp;The court granted a motion to compel arbitration based on the terms of service that users enter into when using Bing&#8217;s service. Binding arbitration has expanded in recent years, with companies extending the reach of arbitration well beyond their products. Disney, for example, <a href="https://www.cnn.com/2024/08/19/business/disney-arbitration-wrongful-death-lawsuit-intl-hnk/index.html">got into hot water</a> (and eventually reversed course) when it tried to compel arbitration for a wrongful death lawsuit because the deceased had used Disney+ once (where the terms compel arbitration to settle disputes). 
Given this success, arbitration may be a key tool that AI companies use against tort claims.</p></li><li><p><strong>Generative AI in Police Reports.</strong> <a href="https://www.eff.org/deeplinks/2024/10/prosecutors-washington-state-warn-police-dont-use-gen-ai-write-reports">EFF weighed in</a> on a Washington State prosecutor&#8217;s office statement that police should write police reports without AI assistance. Generative AI reports, drawn from <a href="https://www.axon.com/resources/draft-one-faqs-for-prosecutors">audio transcripts</a> of body-worn microphone recordings, have the potential to save police time in drafting reports, but have <a href="https://www.eff.org/deeplinks/2024/05/what-can-go-wrong-when-police-use-ai-write-reports">raised concerns</a> over their reliability and <a href="https://link.springer.com/article/10.1007/s11292-024-09644-7">efficacy</a>. As <a href="https://emma-lurie.medium.com/ai-assisted-police-reports-preliminary-musings-about-axons-draft-one-measurement-and-ai-hype-1958d7537829">Emma Lurie writes</a>, &#8220;Draft One is unlikely to be a revolutionary tool. AI interventions in criminal justice&#8212; including Draft One &#8212; often fail to remedy the problems they seek to address. The first two do not preclude that the introduction of these often-limited tools from shaping the way that police function on the ground.&#8221;</p></li><li><p><strong>AI in the Military. </strong>Reporting shows that <a href="https://www.washingtonpost.com/technology/2024/11/08/anthropic-meta-pentagon-military-openai/">Anthropic</a>, <a href="https://theintercept.com/2024/10/25/africom-microsoft-openai-military/">OpenAI</a>, and Meta have all begun providing their AI systems for U.S. military uses in the last few months. This shift is in sharp contrast to <a href="https://web.archive.org/web/20240109122522/https:/openai.com/policies/usage-policies">previous terms of use</a> for their models that prohibited such use cases. 
It seems likely, given reported plans for national security uses of AI by the incoming Trump administration, that these military uses of AI will expand in the coming years.</p></li></ul><blockquote><h4><em>Quick Take</em></h4><p>This move isn&#8217;t the military&#8217;s first embrace of language models. <a href="https://www.theverge.com/2024/8/8/24216215/palantir-microsoft-azure-ai-defense-partnership-surveillance">Microsoft</a>, <a href="https://www.vice.com/en/article/palantir-demos-ai-to-fight-wars-but-says-it-will-be-totally-ethical-dont-worry-about-it/">Palantir</a>, and <a href="https://www.washingtonpost.com/technology/2023/10/22/scale-ai-us-military/">ScaleAI</a> have penned deals for the military to use their AI systems. But AI advances and Trump&#8217;s reelection point to a bigger role for LLMs in the military. For <a href="https://www.axios.com/2024/05/01/pentagon-military-ai-trust-issues">military leaders</a>, the upcoming challenge will be recognizing when an AI tool is ready for use and when it is so unreliable as to be dangerous. Even for lower-stakes, back-office tasks, picking the right AI tool will be a case-by-case call. For example, the Department of Defense pulled language models into one tool for <a href="https://www.dia.mil/News-Features/Articles/Article-View/Article/2926343/gamechanger-where-policy-meets-ai/">searching policy documents</a> with success. But another recent <a href="https://transforming-classification.blogs.archives.gov/2024/07/15/public-meeting-highlights-artificial-intelligence-ai-applications-for-modernizing-declassification-and-foia-processing/">DoD effort</a> to declassify documents with AI explicitly chose older, but more explainable AI models over LLMs. 
And in <a href="https://www.doncio.navy.mil/ContentView.aspx?id=16442">scenarios</a> with risk to human life, AI can make matters worse: <a href="https://www.foreignaffairs.com/united-states/why-military-cant-trust-ai">Max Lamparth and Jacquelyn Schneider</a> warn that LLMs can be unpredictable and fail to reflect complex human decision-making. Yet, as <a href="https://www.ft.com/content/da03f8e1-0ae4-452d-acd1-ec284b6acd78">Marietje Schaake</a> points out, there are few strong binding regulations for AI in military settings&#8212;though there are some guidelines, like <a href="https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf">DoD Directive 3000.09</a>.</p></blockquote><ul><li><p><strong>AI for public sector and public good. </strong>Anthropic rolls out <a href="https://www.anthropic.com/customers/european-parliament">Archibot</a>, an effort to refine search of the European Parliament's legislative documents. Meanwhile, Princeton&#8217;s AI, Law, &amp; Society lab&#8212;in a partnership with Stanford&#8217;s RegLab&#8212; <a href="https://reglab.github.io/racialcovenants/">built</a> an AI system (on top of Mistral&#8217;s 7B open-weight model) to identify over 7,500 racially restrictive covenants in Santa Clara County and helped the county remove them from land deeds<strong>.</strong></p></li><li><p><strong>Tracking AI law &amp; policy. </strong>A team of scholars from Georgetown CSET and Purdue's Governance and Responsible AI Lab published AGORA, an<a href="https://agora.eto.tech"> archive of AI-focused laws and policies</a>. </p></li></ul><p><strong>Who are we?</strong><em> <a href="https://www.peterhenderson.co/">Peter Henderson</a> is an Assistant Professor at Princeton University with appointments in the Department of Computer Science and the School of Public &amp; International Affairs, where he runs the AI, Law, &amp; Society Lab. Previously Peter received a JD-PhD from Stanford University. 
<a href="https://www.dbateyko.info/">Dan Bateyko</a> researches artificial intelligence and law at Cornell University in the Department of Information Science.&nbsp; Every once in a while, we round up news at the intersection of Law, Policy, and AI. Also&#8230; just in case, none of this is legal advice, and any views we express here are purely our own and are not those of any entity, organization, government, or other person.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.trialserrors.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The AI, Law, &amp; Policy Update! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI Law & Policy Update: Copyright Lawsuits Escalate and New AI bills Advance]]></title><description><![CDATA[Artists overcome a motion to dismiss in their lawsuit against image generation systems, new AI Laws are proposed at the federal and state level, and Colorado's AI Act is ready to be signed into law.]]></description><link>https://www.trialserrors.ai/p/ai-law-and-policy-update-copyright</link><guid isPermaLink="false">https://www.trialserrors.ai/p/ai-law-and-policy-update-copyright</guid><dc:creator><![CDATA[Peter Henderson]]></dc:creator><pubDate>Mon, 13 May 2024 13:17:36 GMT</pubDate><enclosure 
url="https://substack-post-media.s3.amazonaws.com/public/images/a0d85f20-cd1f-44f3-8633-ae5677617604_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>New AI Bills and Considered Rule Changes</h2><ul><li><p>US Senators Mark Warner &amp; Thom Tillis introduce the <a href="https://www.warner.senate.gov/public/_cache/files/5/d/5d8e0506-640c-44b2-bf9e-02f1e05d7517/1086DA0659080D3B088DEF8979CEAE38.secure-ai-full-text.pdf">Secure AI Act of 2024</a>, one of many newly proposed AI bills at the federal level.</p></li><li><p>The debate heats up over <a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047">California&#8217;s SB-1047 Bill</a> (The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), as the bill comes closer to passing.</p></li><li><p>The <a href="https://leg.colorado.gov/bills/sb24-205">Colorado AI Act</a> has passed and will likely be signed into law by the Governor. It includes a right to appeal AI decisions in high-risk settings and other consumer protection requirements for those who could potentially be impacted by AI.</p></li><li><p>I <a href="https://www.uscourts.gov/sites/default/files/2024-04_agenda_book_for_evidence_rules_meeting_final_updated_5-8-2024.pdf">testified</a> before the Advisory Committee on Evidence Rules for the US Federal Courts. I&#8217;ll be posting a more formal write-up of what I said soon, but I generally agreed that the Federal Rules of Evidence don&#8217;t need an update yet when it comes to identifying &#8220;fake&#8221; evidence generated by AI. However, an update may be needed to more thoroughly vet studies assisted by AI (e.g., language models to interpret the meaning of language or facial recognition to prove someone is in a video). 
</p><div><hr></div></li></ul><h2>AI Copyright Roundup</h2><ul><li><p>A flurry of new AI copyright lawsuits has been filed against <a href="https://www.courtlistener.com/docket/68497292/dubus-v-nvidia-corporation/">NVIDIA</a>, <a href="https://www.courtlistener.com/docket/68325564/onan-v-databricks-inc/">Mosaic</a>,  <a href="https://www.courtlistener.com/docket/68325564/onan-v-databricks-inc/">Databricks</a>, <a href="https://www.courtlistener.com/docket/68477933/zhang-v-google-llc/">Google</a>, and <a href="https://www.courtlistener.com/docket/68484432/daily-news-lp-v-microsoft-corporation/">OpenAI</a>.</p></li><li><p>The lawsuit against Stability AI, Midjourney, and DeviantArt <a href="https://www.courtlistener.com/docket/66732129/193/andersen-v-stability-ai-ltd/?redirect_or_modal=True">moves</a> to discovery. </p><ul><li><p>Only the DMCA claims were dismissed, where the court stated that it would follow a recent Doe 1 v. GitHub decision and require identical outputs for a DMCA 1202 claim.</p></li><li><p>The direct/indirect infringement claims survive, with the court saying, &#8220;Plaintiffs have plausibly alleged facts to suggest compress copies, or effective compressed copies albeit stored as mathematical information, of their works are contained in the versions of Stable Diffusion identified.&#8221;</p></li></ul></li><li><p>A new Supreme Court <a href="https://www.supremecourt.gov/opinions/23pdf/22-1078_4gci.pdf">decision</a>, in my opinion, quashes any slim hope that OpenAI and other AI companies could mount a successful 507(b) argument that infringement occurred 3+ years ago and thus is past the statute of limitations. So much for training a model and sitting on it for three years to get around the direct infringement claim!</p></li><li><p>The Copyright Office finally <a href="https://www.wired.com/story/the-us-copyright-office-loosens-up-a-little-on-ai/">accepts</a> registration for an AI-assisted book. The caveat is that it is only a compilation right. 
This might be a middle ground solution for many AI-generated pieces where humans do significant work in compiling various AI-generated components.</p><div><hr></div></li></ul><h2>New Guidance on AI</h2><ul><li><p>The Dutch Data Protection Authority <a href="https://autoriteitpersoonsgegevens.nl/actueel/ap-scraping-bijna-altijd-illegaal">issued</a> a report suggesting that scraping is almost always illegal under the GDPR.</p></li><li><p>OECD <a href="https://www.oecd.org/newsroom/oecd-updates-ai-principles-to-stay-abreast-of-rapid-technological-developments.htm">revises</a> its AI principles in light of general purpose systems like foundation models, including new guidance on handling misinformation/disinformation concerns.</p></li><li><p>A number of researchers, myself <a href="https://x.com/PeterHndrsn/status/1785973323106537854">included</a>, continue raising concerns about the increasing likelihood that AI will be widely integrated into warfare due to global competition concerns. See pieces <a href="https://www.foreignaffairs.com/united-states/why-military-cant-trust-ai">here</a>, <a href="https://www.ft.com/content/da03f8e1-0ae4-452d-acd1-ec284b6acd78">here</a>, and <a href="https://arxiv.org/abs/2405.01859">here</a>.</p></li></ul><div><hr></div><p><em><strong>Who am I? </strong>I&#8217;m an Assistant Professor at Princeton University with appointments in the Department of Computer Science and the School of Public &amp; International Affairs. Previously I received a JD-PhD from Stanford University. You can learn more about my research <a href="https://www.peterhenderson.co/">here</a>. Every once in a while, I round up news at the intersection of Law, Policy, and AI. Feel free to send me things that you think should be highlighted<a href="http://twitter.com/PeterHndrsn"> @PeterHndrsn</a>. 
Also&#8230; just in case, none of this is legal advice, and any views I express here are purely my own and are not those of any entity, organization, government, or other person.</em></p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.trialserrors.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The AI, Law, &amp; Policy Update! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI, Law, & Policy Update: From Copyright Disclosures to Privacy Protections]]></title><description><![CDATA[A proposed bill would require disclosure of copyrighted training data, Maryland passes major privacy bills, a judge blocks AI-enhanced video evidence, and more!]]></description><link>https://www.trialserrors.ai/p/ai-law-and-policy-update-from-copyright</link><guid isPermaLink="false">https://www.trialserrors.ai/p/ai-law-and-policy-update-from-copyright</guid><dc:creator><![CDATA[Peter Henderson]]></dc:creator><pubDate>Tue, 16 Apr 2024 13:03:27 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ff256df6-fb4e-44e6-b051-5939f7c17cad_650x220.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Law</h3><ul><li><p><strong><a href="https://www.ftc.gov/system/files/ftc_gov/pdf/EEOC-CRT-FTC-CFPB-AI-Joint-Statement%28final%29.pdf">Federal Agencies Vow to Enforce Anti-Discrimination Laws in Automated 
Systems</a></strong>. This joint statement from CFPB, DOJ, EEOC, and FTC emphasizes that existing laws against discrimination and unfair practices apply to the use of automated systems, including AI, and notes recent actions on their part to begin addressing potentially problematic deployments of AI.</p></li><li><p><strong>Maryland passed two major privacy bills</strong>: the <a href="https://mgaleg.maryland.gov/mgawebsite/Legislation/Details/HB0567?ys=2024RS">Maryland Online Data Privacy Act</a>, focusing on the collection and sale of private data, and the <a href="https://mgaleg.maryland.gov/mgawebsite/Legislation/Details/HB0603?ys=2024rs">Maryland Kids Code</a>, prohibiting online platforms from tracking minors under 18 or using potentially manipulative techniques on minors&#8212;like excessive notifications and auto-playing videos to keep them on the platform. Expect a First Amendment challenge on the latter one.</p></li><li><p><strong>A Washington state judge <a href="https://www.nbcnews.com/news/us-news/washington-state-judge-blocks-use-ai-enhanced-video-evidence-rcna141932">blocked</a> the use of AI-enhanced video as evidence in a murder case</strong>, potentially the first such ruling in a U.S. criminal court. The judge found the AI technology relied on "opaque methods" and could lead to a "confusion of the issues" for the jury. Lawyers for the defendant had sought to introduce the AI-enhanced cellphone video, but prosecutors argued it did not accurately represent the original footage.</p></li><li><p><strong>Rep. Schiff <a href="https://schiff.house.gov/imo/media/doc/the_generative_ai_copyright_disclosure_act.pdf">introduces</a> the &#8220;Generative AI Copyright Disclosure Act of 2024.&#8221; </strong>The bill would require creators/modifiers of Generative AI training datasets to "submit to the Register a notice" detailing "any copyrighted works used" and the dataset's URL (if publicly available). However, to my mind, the bill needs some work. 
It is both overly broad (creating a reporting requirement for most training runs where the model becomes available) and overly narrow (not actually specifying what information would satisfy the reporting requirement). This risks creating a massive administrative overhead without yielding useful information. It even uses the term "retraining the dataset," which is not a technical term (you retrain a model, not a dataset).</p></li></ul><h3>Policy</h3><ul><li><p><strong>The Department of Justice's Computer Crime and Intellectual Property Section (CCIPS) <a href="https://www.copyright.gov/1201/2024/USCO-letters/Letter%20from%20Department%20of%20Justice%20Criminal%20Division.pdf">weighs</a> in on proposed DMCA exemptions for security research on generative AI models</strong>, arguing the exemption should be broad enough to cover research into harmful biases and outputs beyond just security vulnerabilities. The letter cites our <a href="https://www.copyright.gov/1201/2024/comments/reply/Class%204%20-%20Reply%20-%20Kevin%20Klyman%20et%20al.%20(Joint%20Academic%20Researchers).pdf">comment</a> to the Copyright Office based on our recent <a href="https://www.bugcrowd.com/blog/vulnerability-disclosure-policy-what-is-it-why-is-it-important/">work</a> suggesting safe harbors for independent AI evaluation.</p></li><li><p><strong>The &#8220;<a href="https://fingfx.thomsonreuters.com/gfx/legaldocs/znpnkgbowvl/2024-April-Report-and-Recommendations-of-the-Task-Force-on-Artificial-Intelligence.pdf">Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence</a>&#8221; was released</strong>, summarizing many current challenging issues of AI use in the legal system and emphasizing education of lawyers as an immediate priority. One takeaway: &#8220;Based on current case law, AI programs can direct clients to the forms they need to fill out. 
However, these programs may not give any advice as to the substance of the client&#8217;s answers because that would be replacing the work of a human lawyer.&#8221;</p></li><li><p><strong>The Canadian government <a href="https://www.pm.gc.ca/en/news/news-releases/2024/04/07/securing-canadas-ai">proposed</a> a $2.4&nbsp;billion spending package on AI</strong>, including $50 million for a new Canadian AI Safety Institute. This comes at a time when the US and UK AI Safety Institutes <a href="https://www.commerce.gov/news/press-releases/2024/04/us-and-uk-announce-partnership-science-ai-safety">announce</a> a partnership.</p></li><li><p><strong>The German Federal Office for Information Security (BSI)&nbsp;<a href="https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/KI/Generative_AI_Models.html">published</a> a report on the &#8220;Generative AI Models - Opportunities and Risks for Industry and Authorities.&#8221;</strong> </p></li><li><p><strong>Omidyar Network, Ford Foundation, and Nathan Cummings Foundation <a href="https://omidyar.com/omidyar-network-purchases-shares-of-anthropic/">have purchased</a> Anthropic shares</strong>, explicitly citing recent OpenAI governance failures and noting that they &#8220;are hopeful that having mission-aligned investors&#8212;even as a small portion of the shareholders&#8212;will help protect and reinforce the safety and other mission-driven priorities of Anthropic&#8217;s work.&#8221;</p></li></ul><p><em><strong>Who am I?</strong> I&#8217;m an Assistant Professor at Princeton University with appointments in the Department of Computer Science and the School of Public &amp; International Affairs. Previously I received a JD-PhD from Stanford University. You can learn more about my research <a href="https://www.peterhenderson.co/">here</a>. Every once in a while, I round up news at the intersection of Law, Policy, and AI. 
Feel free to send me things that you think should be highlighted<a href="http://twitter.com/PeterHndrsn"> @PeterHndrsn</a>. Also&#8230; just in case, none of this is legal advice, and any views I express here are purely my own and are not those of any entity, organization, government, or other person.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.trialserrors.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI, Law, &amp; Policy! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Law, Policy, & AI Briefing #4]]></title><description><![CDATA[FTC takes action, China publishes algorithm descriptions, and content scanning makes the news (again).]]></description><link>https://www.trialserrors.ai/p/the-law-policy-and-ai-briefing-4</link><guid isPermaLink="false">https://www.trialserrors.ai/p/the-law-policy-and-ai-briefing-4</guid><dc:creator><![CDATA[Peter Henderson]]></dc:creator><pubDate>Fri, 26 Aug 2022 16:30:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!IYhQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fpbs.substack.com%2Fmedia%2FFarKrpAWQAArUNA.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi all, welcome to the fourth edition of the Law, Policy, and AI Briefing. Briefings will go out intermittently, because I need to do research also. 
</p><p><strong>Who am I?</strong> I&#8217;m a PhD (Machine Learning)-JD candidate at Stanford University, you can learn more about my research <a href="https://www.peterhenderson.co/">here</a>.</p><p><strong>What is this? </strong>The goal of this letter is to round up some interesting bits of information and events somewhere at the intersection of Law, Policy, and AI. Sometimes I will weigh in with thoughts or more in-depth summaries. Feel free to send me things that you think should be highlighted <a href="http://twitter.com/PeterHndrsn">@PeterHndrsn</a>. Also&#8230; just in case, none of this is legal advice. And any views I express here are purely my own and are not those of any entity, organization, government, or other person.</p><p><strong>Your briefing awaits below!</strong></p><div><hr></div><p><strong>Law</strong></p><ul><li><p><a href="https://calmatters.org/commentary/2022/08/government/">The California legislature wants to prevent regulatory sandboxes that would allow for new experimental ways to deliver legal services.</a> This would stymie efforts to reduce the costs of access to justice. How does it relate to AI? Well, non-lawyers cannot practice law, which means that AI tools can&#8217;t necessarily directly assist users with legal tasks. While maybe that&#8217;s a good thing for more complex settings, overly restrictive regulatory regimes can prevent innovative new approaches for access-to-justice from breaking through in simpler settings.</p></li><li><p>A <a href="https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html">news story</a> about Google&#8217;s automated filtering mechanisms has been making the rounds. A father took a picture of his baby&#8217;s medical condition at the request of a doctor for a virtual visit. Google scanned this with ML algorithms, identified it as CSAM, locked the account, and notified authorities. 
There are lots of discussions about whether ML should be used in this way due to the potential massive harms from false positives. For example, the EFF weighed in, noting that general monitoring <a href="https://twitter.com/cSchmon/status/1559514525577134084?s=20&amp;t=8pBPkofEwddBbvblNKDxSg">is not the solution to filter CSAM</a>. This also relates back to recent <a href="https://www.eff.org/deeplinks/2022/02/if-earn-it-passes-what-happens-your-iphone-wont-stay-your-iphone">discussions about the EARN IT Act</a>, a bill that would raise the bar for companies, potentially leading to monitoring of content on-device.</p></li><li><p>The FTC is exploring new ways to regulate algorithms, including using FTC Section 5 authority. <a href="https://www.ftc.gov/news-events/news/press-releases/2022/08/ftc-explores-rules-cracking-down-commercial-surveillance-lax-data-security-practices">The agency is seeking comment</a> on a number of issues. Some interesting discussion on other FTC actions <a href="https://www.protocol.com/policy/ftc-algorithm-destroy-data-privacy">here</a>. And there&#8217;s a nice law review article on the matter <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4185227">here</a> by Andrew Selbst and Solon Barocas. Notably, <a href="https://www.ftc.gov/system/files/documents/public_statements/1568663/rohit_chopra_and_lina_m_khan_the_case_for_unfair_methods_of_competition_rulemaking.pdf">Lina Kahn and Rohit Chopra wrote about using Section 5 authority more extensively in Columbia Law Review in the past.</a> And <a href="https://columbialawreview.org/content/algorithmic-collusion-reviving-section-5-of-the-ftc-act/">another law review article</a> by Aneesa Mazumdar would use Section 5 authority to prevent algorithms from colluding with one another.</p></li><li><p>In China, <a href="https://www.bbc.com/news/business-62544950">tech companies must share information about</a> uses of algorithms. 
The government has published this information during a time of increasing efforts to regulate algorithmic systems. </p></li><li><p><a href="https://www.courtlistener.com/opinion/7854294/thaler-v-vidal/?type=o&amp;q=&amp;type=o&amp;order_by=score%20desc&amp;stat_Precedential=on&amp;docket_number=21-2347">No, your AI can&#8217;t be an inventor on a patent in the United States says the Federal Circuit</a>.</p><blockquote><p>The sole issue on appeal is whether an AI software system can be an &#8220;inventor&#8221; under the Patent Act. In resolving disputes of statutory interpretation, we &#8220;begin[] with the statutory text, and end[] there as well if the text is unambiguous.&#8221; BedRoc Ltd. v. United States, 541 U.S. 176, 183 (2004). Here, there is no ambiguity: the Patent Act requires that inventors must be natural persons; <strong>that is, human beings</strong>.</p></blockquote></li><li><p>As a new marketplace for prompts opens up, some interesting food for thought on whether prompts are copyrightable:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://twitter.com/technollama/status/1561283515089653761?s=20&amp;t=8pBPkofEwddBbvblNKDxSg&quot;,&quot;full_text&quot;:&quot;There's a market for AI prompts now. I kid you not. It makes no commercial sense to me, but it opens an interesting legal question. Do prompts for AI tools have copyright? This is not such a outlandish notion as one could believe. 
&quot;,&quot;username&quot;:&quot;technollama&quot;,&quot;name&quot;:&quot;Andres Guadamuz&quot;,&quot;profile_image_url&quot;:&quot;&quot;,&quot;date&quot;:&quot;Sun Aug 21 09:26:11 +0000 2022&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/FarKrpAWQAArUNA.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/4vzAf5ieYG&quot;,&quot;alt_text&quot;:null}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:0,&quot;retweet_count&quot;:23,&quot;like_count&quot;:94,&quot;impression_count&quot;:0,&quot;expanded_url&quot;:{},&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div></li><li><p>Another IP question: do ML systems indicate when something has become generic? Notably, legal scholars <a href="https://law.stanford.edu/wp-content/uploads/sites/default/files/publication/662894/doc/slspublic/ssrn-id2195989.pdf">have written in the past about how using Google can be a shortcut to analyzing trademark distinctiveness</a>. </p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://twitter.com/aram/status/1556291080492142595?s=20&amp;t=28dVYBtcNkEM_Qzz7WZlHA&quot;,&quot;full_text&quot;:&quot;I really hate to say it but this is absolutely brilliant marketing. Trade dress reflected in machine learning algos as evidence of market leadership &amp;amp; consumer sentiment. But how long until AI becomes a vector of genericization? 
&quot;,&quot;username&quot;:&quot;aram&quot;,&quot;name&quot;:&quot;Aram Sinnreich &#128509;&#127926;&quot;,&quot;profile_image_url&quot;:&quot;&quot;,&quot;date&quot;:&quot;Sun Aug 07 14:48:02 +0000 2022&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{&quot;full_text&quot;:&quot;Full page image generated advertising in todays NYT https://t.co/FHqwu6H6Ee&quot;,&quot;username&quot;:&quot;MarkGhuneim&quot;,&quot;name&quot;:&quot;You don't need a metaverse strategy&quot;},&quot;reply_count&quot;:0,&quot;retweet_count&quot;:19,&quot;like_count&quot;:74,&quot;impression_count&quot;:0,&quot;expanded_url&quot;:{},&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div></li><li><p>A lawsuit has been filed against Meta alleging that OnlyFans <a href="https://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=3629&amp;context=historical">abused Meta&#8217;s content filtering algorithms to squash competition</a>. This one is interesting since it involves the alleged exploitation of content monitoring mechanisms for anti-competitive action by a third party. <strong>Note</strong>: Meta claims none of the allegations are true and if they back this up, this case is likely to be dismissed quickly.</p></li></ul><p><strong>Policy and Legal Academia</strong></p><ul><li><p>The CHIPS Act was passed. What does it mean for AI research and AI policy? Stanford HAI writes about it: </p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://twitter.com/StanfordHAI/status/1557422798469038080?s=20&amp;t=28dVYBtcNkEM_Qzz7WZlHA&quot;,&quot;full_text&quot;:&quot;Pres. Biden signed the CHIPS and Sciences Act into law yesterday. Don't read legalese? 
Here we break down its impact on AI, from funding for AI-related research and activities to provisions for AI capacity-building and development programs: <a class=\&quot;tweet-url\&quot; href=\&quot;https://stanford.io/3dppAy1\&quot;>stanford.io/3dppAy1</a> &quot;,&quot;username&quot;:&quot;StanfordHAI&quot;,&quot;name&quot;:&quot;Stanford HAI&quot;,&quot;profile_image_url&quot;:&quot;&quot;,&quot;date&quot;:&quot;Wed Aug 10 17:45:05 +0000 2022&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/FZ0TyD2XkAEgF1y.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/xPKxaXJSB9&quot;,&quot;alt_text&quot;:null}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:0,&quot;retweet_count&quot;:12,&quot;like_count&quot;:37,&quot;impression_count&quot;:0,&quot;expanded_url&quot;:{},&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div></li><li><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4195066">Regulating the Risks of AI</a></p><blockquote><p>This Article is the first to examine and compare a number of recently proposed and enacted AI risk regulation regimes. It asks whether risk regulation is, in fact, the right approach. It closes with suggestions for addressing two types of shortcomings: failures to consider other tools in the risk regulation toolkit (including conditional licensing, liability, and design mandates), and shortcomings that stem from the nature of risk regulation itself (including the inherent difficulties of non-quantifiable harms, and the dearth of mechanisms for public or stakeholder input).</p></blockquote></li><li><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4186986">If You Think AI Won't Eclipse Humanity, You're Probably Just a Human</a></p><blockquote><p>Building machines that can replicate human thinking and behavior has fascinated people for hundreds of years. 
Stories about robots date from ancient history through da Vinci to the present. Whether designed to save labor or lives, to provide companionship or protection, loyal, capable, productive machines are a dream of humanity. The modern manifestation of using human-like technology to advance social interests is artificial intelligence (AI). The continuing development of AI is inevitable and its relevance to national security will continue to grow.</p></blockquote></li></ul>]]></content:encoded></item><item><title><![CDATA[The Law, Policy, & AI Briefing #3: A very late continuation]]></title><description><![CDATA[Hi all, welcome to the third edition of the Law, Policy, and AI Briefing.]]></description><link>https://www.trialserrors.ai/p/the-law-policy-and-ai-briefing-3</link><guid isPermaLink="false">https://www.trialserrors.ai/p/the-law-policy-and-ai-briefing-3</guid><dc:creator><![CDATA[Peter Henderson]]></dc:creator><pubDate>Fri, 22 Jul 2022 16:01:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6Tlv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F215a1832-9a21-4f4d-8fd8-5db922ea7e5c_1200x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi all, welcome to the third edition of the Law, Policy, and AI Briefing. Briefings will go out intermittently, because I need to do research also. This one is <strong>very very</strong> late because&#8230; well&#8230; research. So some of this is likely a bit old news to some of you.</p><p><strong>Who am I?</strong> I&#8217;m a PhD (Machine Learning)-JD candidate at Stanford University, you can learn more about my research <a href="https://www.peterhenderson.co/">here</a>.</p><p><strong>What is this? </strong>The goal of this letter is to round up some interesting bits of information and events somewhere at the intersection of Law, Policy, and AI. 
Sometimes I will weigh in with thoughts or more in-depth summaries. Feel free to send me things that you think should be highlighted <a href="http://twitter.com/PeterHndrsn">@PeterHndrsn</a>. Also&#8230; just in case, none of this is legal advice.</p><p><strong>Your briefing awaits below!</strong></p><div><hr></div><p><strong>Law</strong></p><ul><li><p><a href="https://crsreports.congress.gov/product/pdf/LSB/LSB10776">The American Data Privacy and Protection Act (ADPPA) was introduced in the House of Representatives</a>. It includes a section on Algorithmic Impact Assessment and Evaluation. There were some concerns that there might be pre-emption of California state law, but that is being worked out. <a href="https://twitter.com/EPICprivacy/status/1547937257486630916?s=20&amp;t=Jl5mQIcNMLfcnSdJGfJAwQ">EPIC</a> has a nice breakdown of how different state laws are pre-empted (or not). <a href="https://www.brookings.edu/blog/techtank/2022/07/07/how-comprehensive-privacy-legislation-can-guard-reproductive-privacy/">Brookings</a> has another nice explainer. <a href="https://www.washingtonpost.com/politics/2022/06/27/abortion-ruling-could-scramble-data-privacy-talks/">Notably, Senator Cantwell argues</a> that the bill "does not adequately protect women&#8217;s reproductive information because constraints on private lawsuits will make it harder for women to sue for violations." The <a href="https://twitter.com/EFF/status/1549856118838423552?s=20&amp;t=2Rq5i9mFRbyHXJKo9RQOKQ">EFF</a> was not particularly happy with the bill either.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://twitter.com/EFF/status/1549856118838423552?s=20&amp;t=2Rq5i9mFRbyHXJKo9RQOKQ&quot;,&quot;full_text&quot;:&quot;EFF is disappointed by the latest draft of the American Data Privacy Protection Act, or the ADPPA, a federal comprehensive data privacy bill. While we are still digesting the 132-page version released yesterday, we have three initial objections. 
&quot;,&quot;username&quot;:&quot;EFF&quot;,&quot;name&quot;:&quot;EFF&quot;,&quot;profile_image_url&quot;:&quot;&quot;,&quot;date&quot;:&quot;Wed Jul 20 20:37:48 +0000 2022&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:0,&quot;retweet_count&quot;:276,&quot;like_count&quot;:1579,&quot;impression_count&quot;:0,&quot;expanded_url&quot;:{&quot;url&quot;:&quot;https://www.eff.org/document/july-19-ains-adppa&quot;,&quot;image&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/215a1832-9a21-4f4d-8fd8-5db922ea7e5c_1200x600.png&quot;,&quot;title&quot;:&quot;July 19 AINS - ADPPA&quot;,&quot;description&quot;:&quot;2022-07-18_-_hr_8152_adppa_-_ains.pdf&quot;,&quot;domain&quot;:&quot;eff.org&quot;},&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://twitter.com/EPICprivacy/status/1547937257486630916?s=20&amp;t=Jl5mQIcNMLfcnSdJGfJAwQ)&quot;,&quot;full_text&quot;:&quot;\&quot;A memo comparing the measures prepared by three prominent nonprofits and shared with The Technology 202 argues that the federal bill&#8217;s consumer protections are equal to or better than the California law in a vast majority of areas.\&quot; &quot;,&quot;username&quot;:&quot;EPICprivacy&quot;,&quot;name&quot;:&quot;EPIC&quot;,&quot;profile_image_url&quot;:&quot;&quot;,&quot;date&quot;:&quot;Fri Jul 15 13:32:56 +0000 
2022&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:0,&quot;retweet_count&quot;:10,&quot;like_count&quot;:14,&quot;impression_count&quot;:0,&quot;expanded_url&quot;:{&quot;url&quot;:&quot;https://www.documentcloud.org/documents/22087567-advocates-memo-comparing-federal-privacy-bill-with-california-law&quot;,&quot;title&quot;:&quot;DocumentCloud&quot;,&quot;description&quot;:null,&quot;domain&quot;:&quot;documentcloud.org&quot;},&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div><blockquote><p>"It would prohibit most covered entities from using covered data in a way that discriminates on the basis of protected characteristics (such as race, gender, or sexual orientation). It would also require large data holders to conduct algorithm impact assessments. These assessments would need to describe the entity&#8217;s steps to mitigate potential harms resulting from its algorithms, among other requirements. Large data holders would be required to submit these assessments to the FTC and make them available to Congress on request."</p></blockquote></li><li><p><a href="https://ised-isde.canada.ca/site/innovation-better-canada/en/canadas-digital-charter/bill-summary-digital-charter-implementation-act-2020">Canada introduces the Digital Charter Implementation Act</a> which also has an AI component.</p><blockquote><p>The Act seeks to ensure that "high-impact AI systems are developed and deployed in a way that identifies, assesses and mitigates the risks of harm and bias; establish[es] an AI and Data Commissioner to support the Minister of Innovation, Science and Industry in fulfilling ministerial responsibilities under the Act, including by monitoring company compliance, ordering third-party audits, and sharing information with other regulators and enforcers as appropriate; and outlin[es] clear criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or 
where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment."</p></blockquote></li><li><p><a href="https://www.ftc.gov/news-events/news/press-releases/2022/06/ftc-report-warns-about-using-artificial-intelligence-combat-online-problems">FTC Report Warns About Using Artificial Intelligence to Combat Online Problems.</a> I largely agree: there are a <em>lot</em> of problems with using AI for these purposes, and the recommendations seem reasonable. To my mind, the aim of the report seems to be to throw cold water on calls for legislation <strong>requiring</strong> the use of AI in these cases, which would certainly be problematic. I do, however, think that AI is likely necessary to tackle some of these challenges to some degree.</p></li><li><p><a href="https://www.justice.gov/opa/pr/justice-department-secures-groundbreaking-settlement-agreement-meta-platforms-formerly-known">U.S. DOJ has reached a settlement with Meta to prevent ad-discrimination under the Fair Housing Act</a>. There has been some criticism of this settlement. I do think this is a good thing if there is real monitoring, and it will test algorithmic fairness at scale. However, the maximum penalty under the FHA seems pretty low to have a significant enforcement effect. </p><blockquote><p>Under the deal, Meta must stop allowing advertisers to use the "Lookalike Audience" tool which can allow for discrimination based on protected characteristics under the Fair Housing Act. They must develop a new system by December 2022 which addresses disparities in housing ads. A third party reviewer will investigate and verify the new system to make sure it abides by the settlement terms. 
Meta must pay the United States a civil penalty of <strong>$115,054, the maximum penalty available under the Fair Housing Act.</strong> </p></blockquote></li><li><p><a href="https://www.gov.uk/government/collections/algorithmic-transparency-standard#full-publication-update-history">The UK has created an algorithmic transparency standard</a>. As part of this, they have been regularly releasing reports on uses of AI. For example, on July 7, 2022, <a href="https://www.gov.uk/government/publications/food-standards-agency-food-hygiene-rating-scheme-ai">a report was released by the Food Standards Agency on the use of AI for enforcement prioritization.</a> <strong>Shameless self-promotion alert:</strong> We wrote about the challenges of enforcement prioritization, and discussed food standards agencies, in our recent work <a href="https://arxiv.org/abs/2112.06833">Beyond Ads: Sequential Decision-Making Algorithms in Law and Public Policy</a>. Health safety rating systems can encode some biases that can lead to feedback loops and may be worth exploring more deeply. That&#8217;s not to say that ML shouldn&#8217;t be used in this context, but transparency is important to understand and resolve underlying data and technical issues.</p></li><li><p><a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.390494/gov.uscourts.cand.390494.1.0.pdf">LinkedIn is being sued</a> on a number of antitrust claims for its integration of machine learning algorithms with core products. I'm keeping an eye on how the antitrust+algorithms interaction plays out here.</p></li><li><p><a href="https://agportal-s3bucket.s3.amazonaws.com/0001_%20Stamped%20Amazon%20Complaint.pdf">Antitrust lawsuit filed against Amazon</a>, alleging that its pricing algorithms were programmed to match price floors of third-party sellers. 
If you&#8217;re creating a pricing algorithm, it might be worth checking with some attorneys whether you&#8217;re increasing liability...</p></li><li><p><a href="https://www.techdirt.com/2022/07/08/judge-tosses-defamation-suit-brought-by-shotspotter-against-vice-media-for-reporting-on-its-shady-tactics/">Judge Tosses Defamation Suit Brought By ShotSpotter Against Vice Media For Reporting On Its Shady Tactics</a>. Though I'm not surprised this was dismissed, I'm keeping an eye on this space to see how companies selling AI respond to external audits.</p></li></ul><p><strong>Policy</strong></p><ul><li><p><a href="https://hai.stanford.edu/policy/ai-audit-challenge">Stanford HAI drops a new audit challenge</a> to identify potentially harmful or discriminatory algorithms (and techniques on how to find these failure modes).</p></li><li><p><a href="https://cset.georgetown.edu/publication/ai-faculty-shortages/">Georgetown CSET finds that U.S. universities aren't hiring enough faculty to keep up with demand for AI courses.</a> </p><blockquote><p><strong>Feel free to reach out and hire me as faculty!! </strong></p></blockquote></li></ul>]]></content:encoded></item><item><title><![CDATA[The Law, Policy, & AI Briefing #2: A real exchange about AI super-resolution in court, contract attorneys are monitored by AI (no surprise, it's probably biased), does AI help autocracies, and more!]]></title><description><![CDATA[Your regular briefing on the intersection of law, policy, and artificial intelligence.]]></description><link>https://www.trialserrors.ai/p/the-law-policy-and-ai-briefing-2</link><guid isPermaLink="false">https://www.trialserrors.ai/p/the-law-policy-and-ai-briefing-2</guid><dc:creator><![CDATA[Peter Henderson]]></dc:creator><pubDate>Fri, 19 Nov 2021 00:39:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/sf7xCMFBv5c" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi all, welcome to the second edition of the Law, Policy, and AI Briefing. Briefings will go out roughly once a week &#8211; though probably not every week, because I need to do research also.</p><p><strong>Who am I?</strong> I&#8217;m a PhD (Machine Learning)-JD candidate at Stanford University, you can learn more about my research <a href="https://www.peterhenderson.co/">here</a>.</p><p><strong>What is this? </strong>The goal of this letter is to round up some interesting bits of information and events somewhere at the intersection of Law, Policy, and AI. Sometimes I will weigh in with thoughts or more in-depth summaries. Feel free to send me things that you think should be highlighted <a href="http://twitter.com/PeterHndrsn">@PeterHndrsn</a>. 
Also&#8230; just in case, none of this is legal advice.</p><p><strong>Your briefing awaits below!</strong></p><div><hr></div><p><strong>Law</strong></p><ul><li><p>Super-resolution, deepfakes, and other neural network-based processing have the ability to manipulate images so that they are not true to the original source. In a real courtroom exchange, <strong><a href="https://www.theverge.com/2021/11/10/22775580/kyle-rittenhouse-trial-judge-apple-ai-pinch-to-zoom-footage-manipulation-claim">Kyle Rittenhouse&#8217;s attorney tries to make an argument that Apple&#8217;s pinch-to-zoom feature distorts the image so that it cannot be used as evidence</a>.</strong> (<a href="https://www.forbes.com/sites/anthonykarcz/2021/11/14/apple-pinch-to-zoom-cant-add-things-that-arent-there/">But it would seem that pinch-to-zoom doesn&#8217;t use any neural net to fill in pixels.</a>) As AI is introduced to everyday products, we run the risk of requiring specialized equipment in courtrooms to provide the original image &#8212; or run the risk of battles-of-the-experts duking it out over what&#8217;s a true authentic image. So it&#8217;s probably a good idea for companies to provide options to access unaltered content for these sorts of situations. 
</p><div id="youtube2-sf7xCMFBv5c" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;sf7xCMFBv5c&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/sf7xCMFBv5c?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div></li><li><p><strong><a href="https://www.washingtonpost.com/technology/2021/11/11/lawyer-facial-recognition-monitoring/?utm_campaign=wp_post_most&amp;utm_medium=email&amp;utm_source=newsletter&amp;wpisrc=nl_most&amp;carta-url=https://s2.washingtonpost.com/car-ln-tr/35423c6/618d50409d2fdab56b84f48f/5c43faf7ae7e8a435facf5bb/52/72/618d50409d2fdab56b84f48f">Contract attorneys are being monitored by AI</a></strong> to make sure they&#8217;re doing their jobs remotely. The system monitors activity and flags if it seems like the attorney isn&#8217;t working diligently on their task. And it has the same problem as the many other places where this sort of software has been deployed: it fails for people with dark skin. To my mind, using this software isn&#8217;t a good idea at all &#8212; ethically, pro-socially, or legally. It also seems like it might have labor law and anti-discrimination law implications, but we&#8217;ll see how it plays out in courts. Here&#8217;s a quote that stuck out:</p></li></ul><blockquote><p>&#8220;Several contract attorneys said they worried that their performance ratings, and potential future employability, could suffer solely based on the color of their skin. 
Loetitia McMillion, a contract attorney in Brooklyn who is Black, said she&#8217;d started wearing her hair down or pushing her face closer to the screen in hopes the system would stop forcing her offline.&#8221;</p></blockquote><ul><li><p>If you&#8217;re interested in learning more about the problems with monitoring software, you can read a new article that dives deep into a similar type of system: <strong><a href="https://www.scientificamerican.com/article/your-boss-wants-to-spy-on-your-inner-feelings/">emotion recognition AI used to monitor employees</a></strong>.</p></li><li><p><strong>The White House is putting out a call for feedback on a bill of rights for an automated society. </strong><a href="https://www.whitehouse.gov/ostp/news-updates/2021/11/10/join-the-effort-to-create-a-bill-of-rights-for-an-automated-society/">You can dial in and join this effort with White House OSTP.</a> Ideally, this bill of rights would prevent intrusive and potentially discriminatory uses of AI (see, e.g., above).</p></li><li><p>Check out the interesting talks from <strong><a href="https://www.youtube.com/playlist?list=PLTAvIPZGMUXM-qNUiETa388qLfZuyRndI">a conference on the state of AI in the practice of law.</a> [</strong><em>Shameless plug:</em> <em>If you&#8217;re interested in this subject,</em> <em>you might want to check out section 3.2 of our paper, &#8220;<a href="https://arxiv.org/abs/2108.07258">On the Opportunities and Risks of Foundation Models</a>.&#8221; There, we write about the use of foundation models in legal contexts.</em>]</p></li><li><p><strong><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3965041">Legal scholars propose a right to contest an AI&#8217;s decision</a> </strong>in this new Columbia Law Review article. So if you&#8217;re concerned that an AI system unfairly marked your performance as terrible at a job, this article would advocate for a system to contest such decisions.</p></li><li><p>In U.S. 
prisons, <strong><a href="https://news.trust.org/item/20211115095808-kq7gx/">natural language processing is being used to monitor prisoner phone calls</a>. </strong>And <a href="https://abcnews.go.com/Technology/us-prisons-jails-ai-mass-monitor-millions-inmate/story?id=66370244">it&#8217;s been going on for a while</a>. What are the prisons looking for? Criminal activity, gang relationships, Covid infections/symptoms, instances of self-harm, and <a href="https://www.reuters.com/article/usa-prisons-surveillance/insight-scary-and-chilling-ai-surveillance-takes-u-s-prisons-by-storm-idUSL8N2RP5LL">even positive comments about the prison to help fight lawsuits</a>. As you can imagine, there are many potential problems with these use cases, including privacy concerns, the risk of falsely labeling someone a gang member, etc. Can this be challenged in court? Prisoners don&#8217;t have a right to privacy for telephone calls in most U.S. states, so it&#8217;s certainly an uphill battle. <em>See, e.g., People v. Diaz</em>, 33 N.Y.3d 92, 122 N.E.3d 61 (NY 2019).&nbsp;</p></li></ul><blockquote><p>In Calhoun County, Alabama, prison authorities used Verus to identify phone calls in which prisoners vouched for the cleanliness of the facility, looking for potential ammunition to fight lawsuits, email records show.</p><p>As part of an emailed sales pitch to the jail in Cook County, Illinois, LEO&#8217;s chief operating officer, James Sexton, highlighted the Alabama case as an example of the system&#8217;s potential uses.</p><p>&#8220;(The) sheriff believes (the calls) will help him fend off pending liability via civil action from inmates and activists,&#8221; he wrote.</p></blockquote><ul><li><p><strong><a href="https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4344524&amp;GUID=B051915D-A9AC-451E-81F8-6596032FA3F9&amp;Options=ID">The NYC Council passed the first U.S. 
law regarding fairness of AI-based hiring tools.</a> </strong><a href="https://twitter.com/CenDemTech/status/1458591229357305859?s=20">But some argue that it doesn&#8217;t go far enough</a>. For example, critics argue that the bill only requires companies to audit for discrimination on the basis of race or gender (ignoring discrimination based on other characteristics, like age or disability).</p></li></ul><blockquote><p>&#8220;This bill would require that a bias audit be conducted on an automated employment decision tool prior to the use of said tool. The bill would also require that candidates or employees that reside in the city be notified about the use of such tools in the assessment or evaluation for hire or promotion, as well as, be notified about the job qualifications and characteristics that will be used by the automated employment decision tool. Violations of the provisions of the bill would be subject to a civil penalty.&#8221;</p></blockquote><ul><li><p>Two law review articles discuss the opacity of black-box machine learning systems. <strong><a href="https://ilr.law.uiowa.edu/assets/Uploads/ILR-106-2-Price_Rai.pdf">One law review article suggests that there should be fewer restrictions on opening the black box</a></strong> &#8212; open ML systems can help open science and innovation. 
<strong><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3961863#">Another suggests a more nuanced approach</a></strong> where &#8220;[t]he degree to which legal opacity should be limited or disincentivized depends on the specific sector and transparency goals of specific AI technologies, technologies which may dramatically affect people&#8217;s lives or may simply be introduced for convenience.&#8221; This discussion has implications for how we think about model releases and terms of use, as well as policies for forcing transparency with respect to AI systems.</p></li><li><p>A <strong><a href="https://scholarship.law.slu.edu/cgi/viewcontent.cgi?article=2264&amp;context=lj">new law review article argues that &#8220;the Fourth Amendment imposes significant limits on the preservation of Internet account contents.&#8221;</a> </strong>This new interpretation of the Fourth Amendment would mean that the government can&#8217;t just give a blanket order for a company to preserve your data in case law enforcement needs it.</p></li></ul><blockquote><p>Preservation triggers a Fourth Amendment seizure because the provider, acting as the government&#8217;s agent, takes away the account holder&#8217;s control of the account. To be constitutionally reasonable, the initial act of preservation must ordinarily be justified by probable cause&#8212;and at the very least, in uncommon cases, by reasonable suspicion. The government can continue to use the Internet preservation statute in a limited way, such as to freeze an account while investigators draft a proper warrant application. 
But the current practice, in which investigators order the preservation of accounts with no particularized suspicion, violates the Fourth Amendment.</p></blockquote><div><hr></div><p><strong>Policy &amp; Society</strong></p><ul><li><p><strong><a href="https://digital-strategy.ec.europa.eu/en/news/commission-proposes-common-european-data-space-cultural-heritage">The European Commission proposes a common European data space for cultural heritage.</a> </strong>This might be an interesting new data source for building cross-cultural machine learning models and perspectives (within Europe). Though it is not clear exactly what kind of data will be there.</p></li><li><p>Explainable AI is often described in policy circles as being a necessary and sufficient component for deploying a model. Among many recent works challenging this notion, <strong><a href="https://www.thelancet.com/journals/landig/article/PIIS2589-7500(21)00208-9/fulltext">a recent article addresses the problem with the reliance on explainability in medical settings.</a></strong></p></li><li><p><strong><a href="https://www.nber.org/papers/w29466">A new paper has some interesting (and concerning) insights on the interaction between AI and autocracies</a></strong>:</p></li></ul><blockquote><p>We first show that autocrats benefit from AI: local unrest leads to greater government procurement of facial recognition AI, and increased AI procurement suppresses subsequent unrest. We then show that AI innovation benefits from autocrats&#8217; suppression of unrest: the contracted AI firms innovate more both for the government and commercial markets. 
Taken together, these results suggest the possibility of sustained AI innovation under the Chinese regime: AI innovation entrenches the regime, and the regime&#8217;s investment in AI for political control stimulates further frontier innovation.</p></blockquote><ul><li><p><strong><a href="https://breakingdefense.com/2021/11/china-invests-in-artificial-intelligence-to-counter-us-joint-warfighting-concept-records/">The Chinese military is allegedly investing heavily in AI to counter the U.S. Department of Defense&#8217;s own investments in AI.</a></strong> </p></li><li><p><strong><a href="https://www.cfr.org/blog/ai-code-generation-and-cybersecurity">A short article examines the implications of code generation systems (e.g., Codex, GitHub Copilot) for cybersecurity.</a></strong></p></li><li><p><strong><a href="https://www.facebook.com/business/news/removing-certain-ad-targeting-options-and-expanding-our-ad-controls">Facebook removes fine-grained ad targeting for certain types of categories</a></strong> (e.g., health causes, sexual orientation, religious practices and groups). </p></li><li><p><strong><a href="https://cset.georgetown.edu/publication/staying-ahead/">CSET puts out a policy memo describing how the U.S. 
can stay competitive in AI.</a> </strong></p></li></ul><p></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Law, Policy, & AI Briefing #1: Laws on DRM & scraping evolve, there's now a Russian national code of ethics on AI, Zillow's algorithm creates massive losses, and more...]]></title><description><![CDATA[Your regular briefing on the intersection of law, policy, and artificial intelligence.]]></description><link>https://www.trialserrors.ai/p/the-law-policy-and-ai-briefing-1</link><guid isPermaLink="false">https://www.trialserrors.ai/p/the-law-policy-and-ai-briefing-1</guid><dc:creator><![CDATA[Peter Henderson]]></dc:creator><pubDate>Wed, 10 Nov 2021 05:14:52 GMT</pubDate><enclosure url="https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/a873f8b9-1fb7-4a9e-959d-53254c5e03d9_2048x1356.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi all, welcome to the first edition of the Law, Policy, and AI Briefing. Briefings will go out roughly once a week &#8211; though probably not every week, because I need to do research also. </p><p><strong>Who am I?</strong> I&#8217;m a PhD (Machine Learning)-JD candidate at Stanford University, you can learn more about my research <a href="https://www.peterhenderson.co/">here</a>. </p><p><strong>What is this? </strong>The goal of this letter is to round up some interesting bits of information and events somewhere at the intersection of Law, Policy, and AI. Sometimes I will weigh in with thoughts or more in-depth summaries. Feel free to send me things that you think should be highlighted <a href="http://twitter.com/PeterHndrsn">@PeterHndrsn</a>. Also&#8230; just in case, none of this is legal advice.</p><p><strong>Your first briefing awaits below!</strong></p><div><hr></div><p><strong>Law</strong></p><ul><li><p>A <a href="https://arxiv.org/abs/2111.02374">new paper</a>, <em>&#8220;<strong>Can I use this publicly available dataset to build commercial AI software? 
Most likely not</strong>&#8221; </em>has popped up on arXiv. It analyzes the use of datasets in commercial and non-commercial contexts, and suggests that people aren&#8217;t really abiding by licenses/copyright properly! Keep in mind, though, that in many jurisdictions (e.g., the United States, the European Union, and Japan) there are &#8220;fair use&#8221; or similar text/data-mining exemptions, allowing text/data mining on copyrighted data for non-commercial research.</p></li></ul><ul><li><p>The U.S. Copyright Office, as part of its triennial rule-making process, <a href="https://public-inspection.federalregister.gov/2021-23311.pdf">has created a rule</a> that <strong>limits liability for removing DRM for the purposes of conducting text/data mining for non-commercial research</strong>. Prior to this, if you removed DRM for text/data mining, you would probably be liable under the DMCA. Now, probably not.</p></li><li><p>In <a href="https://www.youtube.com/watch?v=tUkoHeiPGQw">recent oral arguments</a> in the 9th Circuit, <strong>LinkedIn argues that bypassing an IP block for the purposes of scraping constitutes a violation of the Computer Fraud and Abuse Act (CFAA)</strong>, making it a potentially criminal act. The judges seemed unconvinced. The outcome of this case will determine, among other things, whether you are liable if you use proxies to evade being blocked for scraping a website. For those using proxies to avoid being blocked while scraping together data for a dataset, keep a close eye on this.</p></li><li><p>For dataset providers located in China, <strong><a href="https://digichina.stanford.edu/work/translation-outbound-data-transfer-security-assessment-measures-draft-for-comment-oct-2021/">new rules might restrict </a>what data can be transferred out of China</strong>. See analysis <a href="https://digichina.stanford.edu/work/knowns-and-unknowns-about-chinas-new-draft-cross-border-data-rules/">here</a>. 
This has obvious implications for data sharing and hosting for the purposes of, say, training large language models.</p></li><li><p>The state of <a href="https://news.yahoo.com/michigan-senate-passes-bill-end-160018097.html">Michigan has passed a bill</a> that would <strong>restrict government agents from using encrypted apps. </strong>This is good for transparency, but probably not so great for security. <em>Why is this relevant to AI?</em> Well, if you want to discover how governments are using AI, you often have to file a Freedom of Information Act (FOIA) request to get communications between government employees. If they use encryption or apps like Signal, you would probably never get those messages &#8212; and never uncover potentially problematic uses of AI.</p></li><li><p>The <strong><a href="https://www.consumerfinance.gov/about-us/newsroom/cfpb-takes-action-to-stop-false-identification-by-background-screeners/">CFPB will take action</a> to prevent false identification by background screening companies. </strong>Without regulation, it is highly likely that ML will be commonly used for identity matching in the future &#8212; <a href="https://pages.nist.gov/frvt/html/frvt11.html">it is already common in biometrics</a>. The CFPB&#8217;s enforcement decision aims to address bias/unfairness in background screening processes. &#8220;The risk of mistaken identities from name-only matching is likely to be greater among Hispanic, Black, and Asian communities because there is less surname diversity in those populations compared to the white population.&#8221; Mistaken identities can impact job prospects, credit ratings, and livelihoods &#8212; intervention and regulation are important.</p></li></ul><div><hr></div><p><strong>Policy and Society</strong></p><ul><li><p><strong><a href="https://www.nato.int/cps/en/natohq/official_texts_187617.htm">NATO has a new AI strategy</a>. 
</strong>There is some mention of responsible use of AI, but this is a very high-level policy document and the details matter a lot here.</p></li><li><p>A <strong><a href="https://tass.com/economy/1354187">voluntary Russian national code of ethics on artificial intelligence </a>has been put forward by the AI Alliance</strong> jointly with the Analytical Center under the Russian government and the Economic Development Ministry. It was signed by &#8220;Sberbank, Gazprom Neft, Yandex VK, MTS and the Russian Direct Investment Fund, as well as representatives of Skolkovo, Rostelecom, Rosatom, InfoWatch and real estate platform Cian.&#8221; It can be found <a href="https://a-ai.ru/wp-content/uploads/2021/10/Code-of-Ethics.pdf">here</a>.</p></li><li><p>A <a href="https://www.dropbox.com/s/s2q131jeeiyoo63/Kisat_JMP.pdf?dl=0">job market paper in economics</a>, <em>&#8220;Loan Officers, Algorithms, &amp; Credit Outcomes: Experimental Evidence from Pakistan,&#8221;</em> <strong>examines real loan officer decisions in Pakistan as compared to an algorithm</strong>. The paper finds that &#8220;loan officers exhibit a gender equity preference and approve more women once they observe gender without raising overall loan default.&#8221; However, &#8220;while discrimination declines for loan officers, it increases for the algorithm.&#8221; The outcome suggests that &#8220;blinding algorithms to applicant demographic characteristics may boost efficiency and ensure equity in developing economy credit markets.&#8221;</p></li><li><p><strong><a href="https://www.bloomberg.com/news/articles/2021-11-08/zillow-z-home-flipping-experiment-doomed-by-tech-algorithms">Zillow lays off a large part of its workforce after its pricing algorithm induces staggering losses.</a> </strong>This is a cautionary tale about using neural networks without thorough validation procedures and constant re-assessment in place. 
I also wonder about its local effects on housing prices.</p></li><li><p>A <a href="https://cset.georgetown.edu/publication/trends-in-robotics-patents/">new study </a>from <strong>CSET examines the number of robotics patents by country</strong>. It finds that although Russia holds only 2% of all robotics patents, it holds 17% of robotics patents for military applications. The United States shows a similar skew, with 13.2% of all robotics patents but 26.2% of military robotics patents. China and Japan show the opposite trend: China holds 34.6% of all robotics patents but only 24.9% of military robotics patents, and Japan holds 20.8% of all robotics patents but just 2.2% of military robotics patents.</p></li></ul>]]></content:encoded></item></channel></rss>