Litigating the Future: NEPA in the AI Era
Key policy choices will determine if AI helps—or hinders—environmental protection.
In my previous analysis, I explained how NEPA—the law requiring environmental reviews—evolved from a simple disclosure rule into a monstrous veto point. Today's question: what happens to that same process when government agencies have access to AI that can generate unlimited bureaucratic text? The answer, it turns out, may be less than one would anticipate. While AI could revolutionize how agencies write environmental reports, the transformation will likely happen outside the government.1 Examining AI's impact on the most time-consuming parts of the process—drafting reports, processing public comments, and defending decisions in court—reveals why. Together, these show a private sector set to capture serious benefits, and a public sector set to become a punching bag.2
A Brief Overview of the NEPA Process
To understand how these models will affect the NEPA permitting process, it is useful to understand the process itself. An exceedingly condensed version of the process (in chronological order) follows:
Proposed Action
Environmental Impact Statement (EIS) Determination
Project Timetable + Agency Coordination
Notice of Intent (NOI)
Scoping/Preparing Draft EIS
Public Comment
Prepare Final EIS (respond to comments)
File Final EIS
Record of Decision (ROD)
Mitigation & Monitoring Implementation
Supplemental EIS (if triggered)
EPA/CEQ Dispute Resolution (if needed)3
Judicial Review
Of all these steps, three consume the most time: preparing the draft EIS, responding to public comments, and judicial review.
The EIS process starts when an agency determines that a federal action—issuing a permit, releasing funds, building a dam—might significantly affect the environment. The agency then spends years drafting a comprehensive report. Once complete, the draft goes public. Citizens submit comments. Then, the agency responds to each substantive comment and revises the report accordingly. Only after responding can it issue a ROD approving or denying the project.
But approval is not the end. For six years after the ROD, anyone with standing can sue to block the project. As long as plaintiffs clear the initial pleading hurdle, the project typically freezes while courts review thousands of pages of documentation. After several years of gathering evidence, the court will either rule the report insufficient and vacate the decision or declare it complete, at which point the aggrieved party may appeal.
Within the process, two tasks consume the vast majority of time and effort: drafting and commenting. After approval, judicial review stands as the highest hurdle. In examining how AI is likely to change each, there are two yardsticks by which to measure: Do environmental protections improve? Does approval come faster? The ideal outcome achieves better environmental protection in less time. The nightmare scenario delivers neither; imagine a dam that takes 45 years to approve and still kills 1000 grizzly bears.
Drafting
To apply AI, it helps to break drafting down into subtasks. The core of each report is four sections, plus attached appendices. The sections answer four questions: Why is the action being proposed (the purpose and need statement)? What will the proposed action be compared against (the alternatives analysis)? Which resources might be affected (the affected environment)? And what will each alternative's impact be on each resource (the environmental consequences)? The appendices contain the data and modeling assumptions that support the assertions made in those four sections.
Writing these reports is where the models can be most straightforwardly useful. The reports are exceedingly long, often clocking in at more than 1,000 pages (including appendices), and are neither well written nor rigorously analytical. To give you a taste, here is a randomly selected passage:
The water resources objective was developed to ensure the LTEMP does not affect fulfillment of water delivery obligations to the communities and agriculture that depend on Colorado River water and remains consistent with applicable determinations of annual water release volumes from Glen Canyon Dam made pursuant to the Long-Range Operating Criteria (LROC) for Colorado River Basin Reservoirs, which are currently implemented through the 2007 Interim Guidelines for Lower Basin Shortages and Coordinated Operations for Lake Powell and Lake Mead.
A primary aspect of reservoir operations that potentially affects water resources is related to the monthly distribution of the Lake Powell annual release volume and its resulting impact on reservoir elevations, operating tiers, and annual release volumes. Changes to monthly release volumes have the potential to, in critical time periods, affect reservoir elevations for operating tier determinations, which could in rare circumstances affect annual release volumes.
This style of writing may be sleep-inducing, but LLMs excel at producing it. Further, I suspect that, when faced with the question, very few people would argue that a human writing 2,000 pages of this schlock for a report no one will read is a good use of taxpayer dollars. The existence of a written artifact justifying government action may be prescribed, but its method of production need not be. As long as a human honestly confirms that the text is a faithful representation of the work undertaken, AI-generated reports are a pure good. Implementation could plausibly save a year of full-time writing work per EIS.4
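The "year of writing work" claim can be checked with a short back-of-the-envelope script. The inputs (pages per day, review hours per page, LLM drafting speed) are the rough assumptions from the accompanying footnote, not measured figures:

```python
# Back-of-the-envelope estimate of writer-days saved by LLM drafting.
# Assumed inputs: a 1,000-page EIS, humans drafting ~4 pages/day,
# review at ~1 hour/page (8-hour workdays), an LLM drafting ~1 page
# per 5 minutes, plus ~2 writer-days of prompt-tuning and light edits.
PAGES = 1_000
HUMAN_PAGES_PER_DAY = 4
REVIEW_HOURS_PER_PAGE = 1
HOURS_PER_DAY = 8
LLM_MINUTES_PER_PAGE = 5
EDIT_DAYS = 2

human_draft_days = PAGES / HUMAN_PAGES_PER_DAY                  # 250 days
review_days = PAGES * REVIEW_HOURS_PER_PAGE / HOURS_PER_DAY     # 125 days
human_total = human_draft_days + review_days                    # 375 days

# LLM drafting time, converted from minutes to workdays.
llm_draft_days = PAGES * LLM_MINUTES_PER_PAGE / 60 / HOURS_PER_DAY
net_saving = human_total - llm_draft_days - EDIT_DAYS

print(f"Human total: {human_total:.0f} writer-days")
print(f"Net saving:  {net_saving:.0f} writer-days")
```

At roughly 250 workdays in a year, a saving of ~360 writer-days is the "year of full-time writing work" cited above, and then some.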
LLMs could also solve the agency coordination issues that currently add months—and in some cases years—to the NEPA process. Currently, agencies waste enormous time discovering which permits they need, which laws apply, and which agencies must sign off. An AI system could change this overnight.
Picture it: You’re an official who has been asked to plan a dam on the Colorado River. Instead of scouring the library, you start by querying a database containing thousands of past environmental reviews.5 Then, using the details of the particular project, you submit a query to whittle down the list of potential alternatives. Within minutes, it suggests which project alternatives minimize environmental harm based on what worked before. After settling on the appropriate alternatives, the model indicates which effects can be analyzed using existing data and which will require time-intensive fieldwork. Then it works with you to propose a timeline of the required fieldwork—with fallbacks for unexpected hiccups. This back-and-forth could cut the months of typical prep work to weeks, and substantially reduce the number of unanticipated delays (e.g., missing a critical breeding season) that hold up publication.
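The first step of that workflow—retrieving the most relevant past reviews for a new project—can be sketched in a deliberately toy form. The corpus entries and the keyword-overlap scoring below are illustrative placeholders; a real system would use semantic embeddings over the EPA's full EIS archive:

```python
# Toy retrieval over past environmental reviews: rank documents by crude
# keyword overlap with a new project description. All titles and text
# below are invented examples, not real EIS records.
def tokenize(text: str) -> set[str]:
    return {w.strip(".,").lower() for w in text.split()}

past_reviews = {
    "Glen Canyon Dam LTEMP EIS": "dam colorado river flow sediment trout",
    "Gulf Coast Wind Farm EA": "wind turbines birds coastal noise",
    "Sierra Transmission Line EIS": "transmission line forest fire raptors",
}

def rank_reviews(query: str, corpus: dict[str, str]) -> list[tuple[str, int]]:
    # Score each past review by how many query words it shares.
    q = tokenize(query)
    scored = [(title, len(q & tokenize(body))) for title, body in corpus.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)

results = rank_reviews("new dam on the Colorado River affecting trout", past_reviews)
```

Swap the overlap score for vector similarity and the three-entry dictionary for a structured database of thousands of filings, and you have the retrieval backbone the scenario above assumes.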
This is all achievable with current technology, but is far from the likeliest outcome. Without training and clear guidance on the use of AI, most bureaucrats will shy away from taking advantage of the technology, and those who do will be selected for using it to replace their work, rather than augment it. For project-sponsor-led documents, these fears are less well-founded, but the coordination problem—spending the necessary funds to structure the EPA’s collection of EISs into a useful format—remains.6 Until that hurdle is overcome, both agencies and sponsors are likely to see only small improvements in drafting.
Measuring with the two yardsticks, the likeliest outcome (i.e., no official guidance, with bureaucrats quietly using AI on the side) is a slight decrease in the quality of environmental analysis (due to the lack of double-checking), with little to no decrease in review time for agencies. Project-sponsored reviews are more likely to speed up significantly, but given the incentives will see little to no improvement in environmental protections. If the hurdles of proper guidance, training, and database creation are overcome, both types of reviews will speed up while strengthening environmental review.
Public Comment
The public comment sections of NEPA review add an additional wrinkle: two-party AI use.
First, the positives. Today, meaningful participation in environmental reviews requires background knowledge that can leave out locals who may have critical information. For example, a farmer worried about increasing salinity in their irrigation water might know the danger intimately but lack the expertise to write a technical comment that agencies take seriously. AI changes this. That farmer can now describe the problem in plain language, and AI will translate it into the precise, citation-heavy format that gets attention. AI, if allowed to do so, could give agencies the ability to automatically sort through the corpus of submitted comments and retrieve only the most relevant. If done in good faith, AI will reduce the transaction costs of sharing relevant information on both sides and improve the quality of projects constructed.
Unfortunately, if there is one thing the notice-and-comment process is not known for, it’s good faith. An actual notice-and-comment process typically consists of: a flood of completely useless comments from busybodies, a handful of detailed comments from the relevant lobbying groups (which can be helpful but are incredibly skewed), a dozen or so claims that the entire ecosystem will collapse if the project is approved (see: the Sierra Club or the Center for Biological Diversity), and 3–5 relevant, useful comments from citizens.
With AI, this ratio is likely to get worse. Now, if an individual wishes to see a project stopped, they can simply input the draft proposal or EIS into an AI and ask it to find potential holes in the analysis. It’s worth remembering that the criticisms need not actually be reasonable and substantive; they simply need to seem so. Prior heuristics indicating substance, such as proper formatting and citations, will no longer separate signal from noise. In the future, a flood of thousands of irrelevant—but seemingly substantive—comments generated by AI will swamp the handful of relevant comments that could improve the project.
Unlike during the drafting process, agencies will be forced to confront AI in notice-and-comment. The burden of responding will be too high not to.7 I see two possibilities based on the scale of the flood. If the increase in substantive-looking comments is more than an order of magnitude, agencies will be forced to adopt LLMs to group and respond to comments out of necessity. If the increase is smaller, they may simply plug along, leaving only private project sponsors using LLMs to improve their response speed.
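The grouping step—collapsing near-identical submissions so the agency writes one response per cluster instead of one per comment—can be sketched with a crude fingerprint. The normalization below (a lowercased word set) is a stand-in for the semantic clustering an LLM-based system would actually perform, and the comments are invented examples:

```python
# Toy comment triage: group near-duplicate submissions by a word-set
# fingerprint so each group needs only a single response.
from collections import defaultdict

def fingerprint(comment: str) -> frozenset[str]:
    # Ignore case, punctuation, and word order; keep only the vocabulary.
    return frozenset(w.strip(".,!?").lower() for w in comment.split())

def group_comments(comments: list[str]) -> dict[frozenset, list[str]]:
    groups = defaultdict(list)
    for c in comments:
        groups[fingerprint(c)].append(c)
    return dict(groups)

comments = [
    "This dam will destroy trout habitat.",
    "this dam will destroy trout habitat",
    "The EIS ignores downstream salinity impacts on irrigation.",
]
groups = group_comments(comments)
# The first two comments collapse into one group; the third stands alone.
```

A mass-mail campaign that generates thousands of lightly reworded variants defeats this exact-fingerprint version, which is precisely why agencies facing an order-of-magnitude flood would need semantic (LLM-based) grouping rather than string tricks.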
Regardless of which path is taken, the impact of AI on the public comment process is overwhelmingly likely to be negative. Increased accessibility for useful commenters is almost certain to be outweighed by AI slop. The notice-and-comment system seems destined to become a kabuki theatre of fake AI-generated concerns responded to by rote AI-generated responses, with no useful information transferred between parties. Much like a Pacific Island cargo cult, it will retain form but not purpose. Whether this devolution increases or decreases decision timelines will depend on the extent to which agencies are allowed to use these technologies, and the discretion granted to their responses by the courts. All in all, not great.
Judicial Review
Finally, we get to the judicial review process. Fortunately, NEPA judicial review is so poorly structured that it would be difficult to do worse. As with public comment, there will be two countervailing forces. On one side, agencies and sponsors will have the ability to red-team their EISs and ensure that no obvious litigation attack angle exists. A few hours querying the executive summary should reveal any clear missing components (at least to the extent they are included in regulations and not case law). As a result, EISs will become more legally sound. On the other hand, the cost of EIS analysis to draft a complaint will decrease substantially. The ease of spotting oversights will force drafters to red-team with AI, lest the EIS be held up in court by a cheaply drafted complaint.8
For these reasons, I expect judicial review to be where AI improves the process most, increasing both speed and environmental protection. Smart agencies will fix the gaps red-teaming identifies, including gathering better field data on actual environmental impacts, making both reports and protections stronger.
The Supreme Court recently handed agencies another advantage. In the Seven County decision, the Court ruled that agencies have broad discretion to decide what belongs in an environmental review. Legal deference in combination with AI-powered red-teaming will leave lawsuits facing much steeper odds. Judges will see fewer valid complaints because agencies will have already addressed them.
This new equilibrium will be highly dependent on adoption by the relevant parties: agencies, sponsors, and plaintiffs. To the extent that any fail to effectively use AI (or are disallowed in the case of agencies), they will be heavily punished. On net, with full utilization by all three, the balance is likely to shift in favor of agencies and sponsors.
Conclusion
On the whole, I anticipate that LLMs will increase the speed of review while maintaining approximately the current level of environmental protection. My uncertainty is highest around the notice-and-comment process, where I suspect the impact will be largest. Use of LLMs in agency-led reviews and to bullet-proof EISs, when combined with the Seven County decision, should substantially decrease the number of lawsuits agencies lose over time.
Additionally, I suspect there will be a substantial shift from agency to sponsor-led reviews. As the gap in LLM adoption between the public and private sectors grows, the cost and speed differentials will as well. Even with required agency oversight, the ability of private actors to coordinate, draft, and bulletproof environmental reviews will buy a meaningful amount of time. The extent of this shift will be determined by government LLM implementation processes in the coming years.
Already, sponsor-led EISs are somewhere between 10 and 40 percent faster than those led by agencies.9 Given that sponsor-led projects are selected to be large and complicated (thus able to bear the burden of expensive environmental consultants), I expect the average complexity of sponsor-led reviews to decrease as lower costs open the market to previously marginal projects. This will widen the perceived gap in speed between sponsor- and agency-led projects, and drive further privatization. Without a serious focus on improving state AI capacity, private projects will zip through reviews while public works and the communities that rely on them sit stranded in gridlock.
One promising government-driven tool is the Pacific Northwest National Laboratory’s PermitAI program.
For purposes of concision, this post will only cover the core aspects of the NEPA process: the drafting of the EIS, public comment, and the associated lawsuits. I do anticipate models will be quite capable of effectively applying categorical exclusions to federal actions, as well as drafting RODs and helping with monitoring efforts.
It is unclear whether this step remains within CEQ authority after the recent Marin Audubon decision and executive order; the answer is not critical to the discussion, so the step has been left in.
Drafting a 1,000-page EIS requires ≈250 writer-days at ~4 pages/day; adding review at ~1 hr/page (~1,000 hr ≈ 125 writer-days) yields ≈375 writer-days. An LLM at 1 page/5 min produces a first draft in ≈83 hours (≈10.4 days) and, with ≈2 writer-days for prompt-tuning and light edits, yields a net saving of ≈362 writer-days.
With every EIS captured by the EPA, there already exists a corpus of review standards that could serve as a baseline for a fine-tuned model.
Given the cost, and the cost of capital, of certain NEPA-gated projects, such as hyperscaler data centers, building such a database may be net beneficial for a single company to undertake if it is expected to save months of review, thereby solving the coordination problem.
As CEQ regulations have been rescinded, agencies are technically required under the APA only to consider comments, rather than respond to them. This could end up saving the process.
There are additional considerations of how AI is likely to change the balance of power in courtrooms more generally, but I will tackle those in a separate post.
Large range here, see the 2017 DoE Lessons Learned or the FPISC Annual Report.