California Courts Announce New AI Regulations /p/102l07z/california-courts-announce-new-ai-regulations/ Mon, 18 Aug 2025 18:47:49 +0000 On July 18, 2025, California's Judicial Council approved a set of rules for integrating generative AI into judicial operations. With the...

The post California Courts Announce New AI Regulations appeared first on Foley & Lardner LLP.

On July 18, 2025, California's Judicial Council approved a set of rules for integrating generative AI into judicial operations. With this adoption, the courts are looking to put in place the country's first broad framework for generative AI use in court procedures. These new guidelines are expected to go into effect.

These rules grew out of a task force convened under Chief Justice Patricia Guerrero in 2024, which aimed to balance innovation with caution and to ensure AI efficiency without compromising trust. Under the new rules, all courts in the state using AI must adopt clear policies addressing confidentiality, bias, and accuracy by December 15, 2025.

There has been much media and private discussion on why these guardrails are critical. While AI can streamline tasks like searching case law, drafting memos, and summarizing briefs, saving time for court employees, many risks remain, including data breaches and biased outputs. To address this, the rules prohibit entering confidential or personal identifying information, such as driver's license numbers, into public AI tools. They also mandate human review of all AI-generated outputs and require clear labeling of AI-created public content.

According to the ABA Journal, each court's policy must "prohibit the entry of confidential, personal identifying, or other nonpublic information into a public generative AI system," as well as "require disclosure of the use of or reliance on generative AI if the final version of a written, visual, or audio work provided to the public consists entirely of generative AI outputs."
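
As a concrete, entirely hypothetical illustration of how a court might operationalize these two requirements in software, the sketch below screens text for personal identifying information before it is sent to a public AI tool and appends a disclosure label to fully AI-generated public content. The patterns, function names, and label wording are assumptions for illustration, not anything prescribed by the rules.

```python
import re

# Hypothetical pre-submission screen a court might run before text is
# sent to a public generative AI system. The PII patterns below are
# illustrative assumptions, not part of the Judicial Council's rules,
# which mandate only that each court adopt such a policy.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ca_drivers_license": re.compile(r"\b[A-Z]\d{7}\b"),  # CA format: 1 letter + 7 digits
}

def screen_for_public_ai(text: str) -> list[str]:
    """Return the PII categories found; an empty list means the text
    may proceed to a public AI tool under this toy policy."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def label_ai_output(text: str, fully_ai_generated: bool) -> str:
    """Attach a disclosure when a public work consists entirely of
    generative AI output, as the rules require."""
    if fully_ai_generated:
        return text + "\n[Disclosure: this document was generated entirely by AI.]"
    return text
```

A real deployment would need far more robust PII detection than two regexes, but the shape of the check, screen before submission and label on publication, tracks the two quoted requirements.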

The rules do not allow AI to make decisions or act autonomously; human oversight ensures that AI supports, not supplants, judicial expertise.

By automating repetitive tasks, AI quickens case resolutions and reduces workloads. However, the rules acknowledge AI's limitations, particularly around bias. AI systems, trained on historical data, can inadvertently amplify societal inequities if left unchecked. California's framework requires courts to be proactive in preventing discriminatory use while maximizing AI's strengths and mitigating its risks.

Transparency, accuracy, privacy, and security are other highlighted areas. AI-generated documents or opinions made public must be disclosed as such, reinforcing the legal system's foundation of trust.

As the largest state to adopt such a comprehensive framework for AI use in its courts, California has positioned its policy as a potential national blueprint. It could set a standard for responsible AI use, and more states, such as New York, are exploring their own AI rules.

By outlining clear, ethical guidelines, California is leading the way for a judiciary that’s faster, fairer, and more accessible.

CHAI Health Law & AI Symposium /insights/events/2025/09/chai-health-law-ai-symposium/ Tue, 26 Aug 2025 20:22:49 +0000 The post CHAI Health Law & AI Symposium appeared first on Foley & Lardner LLP.

Big Tech Looks to AI Startups to Secure Talent /p/102l0h1/big-tech-looks-to-ai-startups-to-secure-talent/ Wed, 20 Aug 2025 20:40:35 +0000 What is the Impact for Silicon Valley Innovation? Acquihires have been a part of Silicon Valley for quite some time now. Larger tech...

The post Big Tech Looks to AI Startups to Secure Talent appeared first on Foley & Lardner LLP.

What is the Impact for Silicon Valley Innovation?

Acquihires have been a part of Silicon Valley for quite some time now. Larger tech companies come in and acquire startups mainly for their talent, as opposed to their products or technology. However, beginning with ex-FTC Chair Lina Khan's subpoenas probing big tech companies' prior unreported small acquisitions, those companies were effectively frozen out of the M&A market from early 2021 onward. As new AI startups were formed, they couldn't exit and had trouble raising follow-on rounds of capital.

Then, starting with Microsoft's licensing deal with Inflection AI, we saw a new kind of "acquihire" designed to circumvent the regulatory shutdown. Whereas traditional acquihires resulted in a return of capital to startup investors and founders, there is now a new trend taking shape in Silicon Valley: major players in big tech have begun hiring away top AI talent in what has been termed the "reverse acquihire."

These aren't acquisitions of the startup or of the entire team, but rather a poaching of founders and AI researchers, sometimes packaged with a small-dollar license of the startup's technology that does not return meaningful capital to investors. So, what happens to the remaining business? While this provides big tech with an alternative way to bring in the talent it needs, it also means that AI leaders and top talent are leaving their companies behind, creating what have been called "zombie startups."

The driving force behind this on the big tech side is the immediate need for talent, combined with a way to circumvent regulatory hurdles. The WSJ says companies see this stage in AI development as a "once in a generation opportunity," and that means they need top talent quickly to capitalize on the moment. Additionally, as we have written about previously, acquihires provide an easier route to bring on talent without the regulatory and integration issues of a traditional acquisition. There is also great financial incentive for the founders and researchers being lured away from startups, so it seems like a win-win situation. But what about the remaining business and the startup's investors?

In the traditional Silicon Valley startup model, these startups would be on a path toward a major exit event; instead, they are losing those driving the company forward. Those who leave see the big payday, but not necessarily those who stay or those who invested. CNBC cites tech investors and startup employees as indicating that this trend "threatens to thwart innovation as founders abandon their ambitious projects to work for the biggest companies in the world."

Could this significantly impact the traditional startup model for Silicon Valley? If the trend continues, it very well could. Future employees could see startups as too risky, or investors could become more hesitant to put their money into a startup thinking that the founders might leave. And while big tech companies may secure the talent they need today, the long-term cost could be a weakening of the very startup pipeline that has historically driven disruptive breakthroughs. 

For founders and researchers, there is the allure of immediate financial reward, but the trend also poses a risk to the other employees, investors, and the broader innovation pipeline. Ultimately, it will be important to preserve the independent startup in Silicon Valley and to balance the short-term race for AI talent with the long-term need to sustain entrepreneurial ambition.

How do we incentivize founders to keep building? How do we incentivize venture funds to continue allocating capital to startups? Not by looking at one blockbuster IPO from Figma and declaring mission accomplished (a reference to Lina Khan's recent case of schadenfreude), but rather by cutting off the regulatory handcuffs put in place by the recently deposed regulators to enable an IPO market. That is what Silicon Valley really needs.

Securing Value: Patent Strategies for AI-Accelerated Drug Repurposing /p/102l0hd/securing-value-patent-strategies-for-ai-accelerated-drug-repurposing/ Thu, 21 Aug 2025 15:35:45 +0000 Drug development can be slow, costly, and risky, often taking more than a decade and billions of dollars to bring a single therapy to the...

The post Securing Value: Patent Strategies for AI-Accelerated Drug Repurposing appeared first on Foley & Lardner LLP.

Drug development can be slow, costly, and risky, often taking more than a decade and billions of dollars to bring a single therapy to market. Drug repurposing offers a faster, lower-risk path to new therapies by leveraging the established safety and efficacy profiles of existing drugs. Nevertheless, many repurposing programs fail due to low clinical efficacy or unexpected toxicities, and the process often remains labor-intensive and time-consuming. A recent review by Wan et al. explains different approaches for using artificial intelligence (AI) to accelerate drug repurposing and overcome some of the main challenges of this process. While this scientific progress is crucial, repurposed drug candidates must also overcome significant regulatory and patent challenges for successful commercialization.

Wan et al. emphasize that AI can leverage large-scale biological and clinical datasets to identify new therapeutic uses for existing drugs. By integrating various data types, like transcriptomic and proteomic profiles, drug-target interaction databases, and real-world evidence from electronic health records, AI models can evaluate drug candidates across a wide range of metrics. Drug repurposing pipelines based on such AI models can reveal hidden on- or off-target effects and predict novel drug-disease associations to dramatically accelerate the search for viable candidates. Key obstacles to the commercial success of AI-enabled repurposing of drugs include data reliability, clinical validation, and regulatory hurdles. However, recent developments suggest that AI is rapidly transforming drug repurposing from being dependent on serendipity into a systematic and data-driven discipline.
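
To make the idea of predicting drug-disease associations concrete, here is a minimal sketch of one classic computational approach that the transcriptomic data described above supports: signature reversal, in which a drug whose expression signature is anti-correlated with a disease signature is ranked as a repurposing candidate. The gene values are invented toy numbers, and this illustrates the general technique, not Wan et al.'s specific pipelines.

```python
import math

# Toy "signature reversal" repurposing sketch: a drug whose
# gene-expression signature is strongly anti-correlated with a disease
# signature is a candidate to reverse that disease state. All numbers
# below are invented for illustration, not real transcriptomic data.

def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def reversal_score(drug_sig: list[float], disease_sig: list[float]) -> float:
    """Higher is better: +1 means the drug perfectly reverses the
    disease signature (cosine of -1); -1 means it mimics the disease."""
    return -cosine(drug_sig, disease_sig)

# Disease up-regulates genes 1-2 and down-regulates gene 3.
disease = [2.0, 1.5, -1.0]
drug_a = [-1.8, -1.2, 0.9]   # pushes expression the opposite way
drug_b = [1.9, 1.4, -1.1]    # mimics the disease

candidates = {"drug_a": drug_a, "drug_b": drug_b}
ranked = sorted(candidates, key=lambda d: reversal_score(candidates[d], disease), reverse=True)
```

Real pipelines score thousands of genes across many cell lines and fold in the other data types mentioned above, but the ranking step has this same shape.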

Repurposed drugs also face significant patent protection obstacles because their core molecules are already public. As I explored in a previous article on patent strategies for repurposed drugs, successful protection requires a multi-layered approach, including:

  • Methods-of-use patents (covering new indications, dosing regimens and routes, and/or patient subpopulations)
  • Formulation patents (extended-release, injectable depot, or novel delivery mechanisms)
  • Combination patents (synergistic pairing with another therapeutic agent)

Strategic patent protection can transform repurposed drugs and associated innovations into valuable assets. Aligning AI innovation with a thoughtful IP strategy will not only safeguard value but also ensure that scientific breakthroughs in drug repurposing translate into market success.

AI Fair Use Decisions Bode Well for the Semiconductor Industry /p/102l0r2/ai-fair-use-decisions-bode-well-for-the-semiconductor-industry/ Mon, 25 Aug 2025 14:48:09 +0000 Summary judgment was recently granted for defendants based on fair use in two copyright infringement actions challenging the training of...

The post AI Fair Use Decisions Bode Well for the Semiconductor Industry appeared first on Foley & Lardner LLP.

Summary judgment was recently granted for defendants based on fair use in two copyright infringement actions challenging the training of large language models (LLMs), one against Meta relating to its Llama LLMs,[1] and the other against Anthropic relating to its Claude LLMs.[2] The decisions bode well for the continued development of the generative AI industry, and therefore for the semiconductor industry, which is building out the infrastructure and higher layers of the generative AI tech stack.

In both cases, authors challenged the unauthorized downloading of their copyrighted works and their copying and use for training LLMs, and in Anthropic鈥檚 case, also the creation of a general-purpose digital library. Neither case involved challenges to the LLMs鈥 outputs.       

LLM Training

Training an LLM involves the use of an enormous number of texts (including, for Claude and Llama, millions of books), which are copied in a multistep process that starts with each text being translated into short sequences of words and punctuation called "tokens," the units on which training is performed. Training then involves the use of a statistical language model to learn patterns from these "tokenized" texts, including predicting the next word in a sequence given the context from the preceding words, and then repeating the process. The prediction is compared to the original, and the statistical model is adjusted so that next time it is more likely to predict correctly. The statistical language model operates through the use of "vectors," a sort of multi-dimensional matrix that captures the relatedness (called "weights") of different words, grammar patterns, or story themes. At a general level, the Anthropic court described training as using the authors' works to "iteratively map statistical relationships between every text-fragment and every sequence of text-fragments so that a completed LLM could receive new text inputs and return new text outputs as if it were a human reading prompts and writing responses."

Copyright Law and Fair Use

The policy behind the 1976 Copyright Act is to promote the progress of science and the arts by encouraging authors to create new creative works. Section 106 of the Act grants a copyright holder exclusivity with respect to enumerated actions, such as reproduction, preparation of derivative works, and distribution of copies. It does not grant a monopoly over all uses of the copyrighted work. Section 107 of the Copyright Act provides the affirmative defense of "fair use" for acts otherwise infringing the exclusive rights of a copyright holder, the test for which includes the following four factors:

(1)    The purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;

(2)    The nature of the copyrighted work;

(3)    The amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

(4)    The effect of the use upon the potential market for or value of the copyrighted work.

Fair use is an affirmative defense that is applied holistically and has been described as an "equitable rule of reason."[3] Courts have typically viewed the first and fourth factors as the most significant, with the fourth being particularly important.

The Anthropic Decision

The materials used by Anthropic included millions of books downloaded from pirated sources and millions of print books that Anthropic purchased and scanned into digital form with machine-readable text. This was both for the purpose of creating a general research library for potential future use and for training Claude.

Judge Alsup bifurcated his analysis into the use of books for training the LLM and the use of books to build a central library. He held that both the use for training and the digitization of purchased books to build a central library were fair use, but the use of pirated books to build a central library was not. He made clear that summary judgment did not extend to future copies made from the central library that were not used for training LLMs.   

With respect to the first factor, Judge Alsup held that the purpose and character of using the copyrighted works to train LLMs to generate new text was "quintessentially transformative." The use was not simply to memorize and replicate the works trained on, but, "like a reader aspiring to be a writer," to learn from them and create something different. Accordingly, the first factor weighed in favor of fair use for the training copies.

With regard to copies used to build the central library, Judge Alsup bifurcated his analysis into the pirated copies and those Anthropic purchased in print and then digitally converted. He held that the latter group, which facilitated storage and searchability and did not result in new copies being shared with third parties, was transformative. On the other hand, Judge Alsup held that the use of the pirated works was "inherently, irredeemably infringing," and their use to build a research library was not transformative. Judge Alsup distinguished other decisions, including those where copies were unavailable for purchase or loan, copies were transformed into a significantly different form, or the defendant already possessed authorized copies.

Judge Alsup held that the second factor (the nature of the copyrighted work) weighed against fair use because the works at issue involved expressive content, which is entitled to greater protection under copyright law than factual works.

Judge Alsup held that the third factor (the amount and substantiality of the work used) involved an assessment of whether the amount of copyright-protected material copied was reasonable in relation to the purpose of the copying. The key to the analysis was not how much text was copied, but how much was made accessible to the public. With respect to training, Judge Alsup held that while entire books were used, there was no allegation that the material was made available to the public as output. He found that the third factor favored fair use for training because of the large amount of data that Anthropic reasonably needed to train its LLMs. With respect to building a central library, Judge Alsup held that the third factor favored fair use for the purchased copies, but against fair use for the pirated copies, given that Anthropic had no right to hold them at all.

Judge Alsup held that the fourth factor (market effect) also favored fair use with respect to training LLMs. He held that the fourth factor focuses on the extent to which the challenged use acts as an actual or potential market substitute for the copyrighted work. Judge Alsup noted that the authors conceded that the LLMs did not produce exact copies or infringing knockoffs of their works. Instead, the authors argued that the LLMs would "result in an explosion of works competing with their works." Judge Alsup analogized this argument to a complaint that "training schoolchildren to write well" would also result in an explosion of competing works, and held that this "is not the kind of competitive or creative displacement that concerns the Copyright Act. The Act seeks to advance original works of authorship, not to protect authors against competition" (citing Sega Enterprises Ltd. v. Accolade, Inc., 977 F.2d 1510, 1523-24 (9th Cir. 1992)). Judge Alsup also rejected the plaintiffs' argument that training LLMs would harm an emerging market for licensing works to train LLMs, holding that the Copyright Act does not entitle plaintiffs to exploit such a market that could develop.

Judge Alsup held that the fourth factor was neutral with respect to the purchased library copies that were converted to digital form, and pointed against fair use for the pirated works, given that pirated copies "plainly displaced demand" for the plaintiffs' books.

Judge Alsup, weighing all the factors, thus granted Anthropic鈥檚 motion for summary judgment on the issue of fair use with respect to the training copies and books legitimately purchased to build a digital library, but denied summary judgment for Anthropic on the pirated copies, reserving the decision for trial.

The Meta Decision

The Meta decision involved an action by 13 authors against Meta for downloading their works from so-called "shadow libraries" of pirated works and using them to train Meta's LLM. A key difference between the two decisions was the primary weight that Judge Chhabria gave to the fourth factor and his view, expressed in a lengthy dictum, that in many cases LLM conduct may fail the fair use test because LLMs often "dramatically undermine the market" for the materials on which they train. By way of example, Judge Chhabria speculated that an LLM capable of producing endless books about how to take care of a garden could greatly diminish the market for human-authored garden books. He indicated that Judge Alsup's Anthropic decision was overly focused on the transformative nature of generative AI (the first factor in the fair use analysis), "while brushing aside concerns about the harm it can inflict on the market for the works it gets trained on" (the fourth factor). Judge Chhabria therefore appeared to endorse a market dilution argument that, based on Sega, Judge Alsup flatly rejected. This theory was also recently supported by the U.S. Copyright Office in its May 2025 report "Copyright and Artificial Intelligence," albeit with an acknowledgment of the "uncharted territory." Judge Chhabria raised a number of questions implicated in a market dilution analysis, including whether Llama was capable of generating books, and if so, what type of books; what impact it would have on competition; and how the market for the plaintiffs' books would fare if Llama could use their books for training versus being unable to use them.

Both judges rejected another fourth-factor argument: that the unauthorized training of LLMs harmed the market for licensing books for LLM training. Both courts held that this was not the type of market that the Copyright Act entitles the plaintiffs to exploit.

Regarding the first factor, Judge Chhabria also ultimately agreed that the LLMs' use was transformative, which is key to finding that the first factor favors fair use. But Judge Chhabria took a different approach from Judge Alsup regarding whether the analysis should focus on LLM training as the sole "use." Judge Chhabria rejected the plaintiffs' attempt to bifurcate the analysis into Meta's downloading of the books and use of the books for LLM training, stating that the downloading must be considered in light of the ultimate purpose of LLM training. Judge Alsup permitted a bifurcated analysis, albeit with respect to building a library, as opposed to simply downloading. Using this bifurcated approach, Judge Alsup held that the use of pirated works in the library weighed against fair use. Judge Chhabria, on the other hand, considered the use of shadow libraries only as part of his unitary analysis and dismissed its significance. He held that while it was relevant to the issue of bad faith, and could have been significant if Meta's downloading had been part of peer-to-peer file sharing that helped to perpetuate the shadow libraries, that was not the case here.

What Are the Implications for the Future Development of LLMs?

There is clear recognition of the significant transformative nature of LLMs, which is an important factor favoring fair use. One weak spot for future decisions is Judge Chhabria's endorsement of a market dilution test. But this endorsement should be considered in light of the associated questions he raised. Importantly, this is an inquiry heavily dependent on the nature of the market. It is a safe guess (for now) that most users of LLMs are not writing novels, so the "explosion" of competing, LLM-generated novels may end up being more of a theoretical concern. But for other works, for instance news articles, biographies, and other nonfiction that can be quickly produced en masse by LLMs, Judge Chhabria suggested that there may be market dilution concerns. Judge Chhabria's dictum also applies outside of text-based works. For instance, an LLM trained on a specific songwriter's catalogue could produce works diluting the market for that artist's music or for any genre uniquely associated with that artist, disincentivizing the artist, and potentially others, from continuing to make music in that space. Appropriate guardrails could limit the exposure to market dilution claims, should the market dilution theory gain judicial traction.

Another takeaway from the decisions is that the use of pirated works in connection with training should be avoided. In Anthropic, the fact that the books were pirated weighed heavily against fair use. And in Meta, Judge Chhabria also left open the possibility that use of pirated works could be relevant to a fair use analysis.

A third takeaway is that it was important in both decisions that the LLMs could not reproduce more than very short passages from the training materials. So LLMs should continue including guardrails that prevent memorization and regurgitation of extensive passages of training materials. For instance, Judge Chhabria鈥檚 decision emphasized how Llama was configured to return no more than 50 words from any given training source.
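
A guardrail of this kind can be sketched as a post-generation filter. The 50-word figure below comes from the decision's description of Llama's configuration; the implementation itself is an assumption for illustration, not how any production LLM actually enforces such a limit.

```python
# Illustrative sketch of the kind of output guardrail the decisions
# credit: before returning a response, check whether it reproduces a
# long verbatim run from any training text. The 50-word cap echoes the
# court's description of Llama; everything else here is an assumed
# toy implementation.

MAX_VERBATIM_WORDS = 50

def longest_shared_run(output: str, source: str) -> int:
    """Length, in words, of the longest verbatim run shared by the two texts
    (classic longest-common-substring over word sequences)."""
    a, b = output.lower().split(), source.lower().split()
    best = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1  # extend the matching run
                best = max(best, cur[j])
        prev = cur
    return best

def passes_guardrail(output: str, training_texts: list[str]) -> bool:
    """True if the output reproduces no training text beyond the cap."""
    return all(longest_shared_run(output, t) <= MAX_VERBATIM_WORDS for t in training_texts)
```

A production system would use indexed n-gram lookups rather than a quadratic scan, but the policy being enforced, no long verbatim regurgitation of training material, is the point the courts found persuasive.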

A related point is that the cases did not involve outputs. Consequently, the decisions do not address the situation where an LLM produces an unauthorized replica of a copyrighted work, whether through a generative process or memorization.

As indicated above, the decisions do not provide a compelling reason to put the brakes on the generative AI industry, nor do markets seem to have viewed them that way. The continued growth will drive further demand for the semiconductor products needed to support that growth. Moreover, even if copyright infringement were found in a future case, the risk of secondary liability for chipmakers seems trivial given available defenses, such as those based on the existence of non-infringing uses.  

[1] Kadrey v. Meta Platforms, Inc., No. 3:23-cv-03417-VC (N.D. Cal. June 25, 2025)

[2] Bartz v. Anthropic PBC, No. 3:24-cv-05417-WHA (N.D. Cal. June 23, 2025)  

[3] Google LLC v. Oracle Am., Inc., 593 U.S. 1, 19 (2021).

 

Aaron Maguregui and Jennifer Hennessy Assess HIPAA Risks of AI Scribes /news/2025/08/maguregui-hennessy-assess-hipaa-risks-of-ai-scribes/ Mon, 11 Aug 2025 18:49:41 +0000 The post Aaron Maguregui and Jennifer Hennessy Assess HIPAA Risks of AI Scribes appeared first on Foley & Lardner LLP.

Beyond the Hype Part 1: Real-World Impact of AI-Powered Clinical Documentation /insights/events/2025/09/beyond-hype-part-1-real-world-impact-ai-powered-clinical-documentation/ Tue, 19 Aug 2025 16:02:19 +0000 The post Beyond the Hype Part 1: Real-World Impact of AI-Powered Clinical Documentation appeared first on Foley & Lardner LLP.

Foley Represents Cortical Ventures as Lead Investor in SAFE Financing for 2nd Set AI /news/2025/08/foley-represents-cortical-ventures-as-lead-investor-in-seed-round-for-2nd-set-ai/ Tue, 19 Aug 2025 17:25:26 +0000 The post Foley Represents Cortical Ventures as Lead Investor in SAFE Financing for 2nd Set AI appeared first on Foley & Lardner LLP.

Aaron Maguregui Assesses HIPAA Challenges as AI Advances in Health Care /news/2025/08/maguregui-assesses-hipaa-challenges-as-ai-advances-in-health-care/ Fri, 01 Aug 2025 20:32:37 +0000 The post Aaron Maguregui Assesses HIPAA Challenges as AI Advances in Health Care appeared first on Foley & Lardner LLP.

Foley Represents NEA as Lead Investor in Series A Funding for AI Marketing Platform Bluefish /news/2025/08/foley-represents-nea-as-lead-investor-in-series-a-funding-for-ai-marketing-platform-bluefish/ Mon, 25 Aug 2025 17:17:04 +0000 The post Foley Represents NEA as Lead Investor in Series A Funding for AI Marketing Platform Bluefish appeared first on Foley & Lardner LLP.
