
Friday, August 29, 2025

Attorney Faulted For Submitting Brief with AI Hallucinations


Examples of attorneys getting in trouble for utilizing AI tools for legal research, and then failing to check the accuracy of the information gathered, have now occurred in Pennsylvania.  Inaccurate information secured from AI sources is known as a hallucination.

In the Pennsylvania federal court case of Jakes v. Youngblood, No. 2:24-cv-1608 (W.D. Pa. June 26, 2025 Stickman, J.), the court faulted an attorney for submitting briefs with wholly fabricated quotations from case law, including fabricated quotations from this court’s own prior Opinion. 

The court faulted that attorney for not only failing to offer any explanation for the deficiencies and fabrications in his own brief, but for also attacking the content of the opposing party’s brief, which the court noted did not contain any fabricated quotations or misrepresented case law. 

The court also noted that, “[e]ven more outrageously,” a review of the AI-happy attorney’s reply brief demonstrated that that brief also contained fabricated quotes and misrepresented case law.

The court noted that it found it to be “very troubling” that, when accused of serious ethical violations, the attorney at fault “chose to double down” and not admit to wrongdoing. 

In the end, the court noted that it viewed the attorney’s “conduct as a clear ethical violation of the highest order.” 

In its Opinion, the court noted that the attorney at fault had filed a Withdrawal of Appearance in response to the issues presented.

This Pennsylvania federal court cited to Federal Rule of Civil Procedure 11 as confirming that attorneys have legal and ethical duties owed to the court in terms of filings presented to the Court. The court also cited to Pennsylvania Rule of Professional Conduct 3.3 regarding candor toward a tribunal.

In its Opinion, the court presumed that the at fault attorney’s briefs were constructed by generative artificial intelligence utilized by the attorney, rather than an effort by the attorney to personally construct false and misleading information. Regardless, the court noted that the attorney still had an ethical obligation under Rule 11 and the state’s professional canons to review every document submitted to the court under their name and signature in order to ensure the accuracy of the document.

The court also noted that an attorney who signs and files a brief authored by a non-lawyer, such as a paralegal, an intern, or a clerk, is personally responsible for all that the filing contains. The court noted that the same rule applies to the use of artificial intelligence.

In the end, the Jakes court dismissed the filings presented by the at fault attorney and ordered that attorney to show cause as to why his filings should not be viewed as having violated Rule 11 and Pa. RPC 3.3.


Anyone wishing to review a copy of this decision may click this LINK.

Source of image:  Photo by Igor Omilaev on www.unsplash.com.

Another Pennsylvania Attorney Sanctioned by Court for Submitting Inaccurate Citations Apparently Secured From AI Research


In another Pennsylvania case involving an attorney utilizing AI hallucinations in a court filing, the court issued sanctions.

In Bevins v. Colgate-Palmolive Co., No. 25-576 (E.D. Pa. April 10, 2025 Baylson, J.), the attorney provided the court with case citations in court filings that were inaccurate and did not lead the reader to any identifiable court Opinion. The court noted that, based upon its search, it could not locate any case corresponding to the two citations at issue, nor could it detect a possible typographical error in the citations provided.

When the court ordered the attorney to provide an explanation, the attorney asserted that the inclusion of the incorrect citations was unintended given that he planned to replace the wrong cite with a proper one but failed to do so in his final draft. The court noted its concern as to why the attorney was silent as to his act of providing the court with case citations to decisions that did not exist and, as such, the court noted that it was “unconvinced by counsel’s explanations.”

The court referred to Rule 11 and sanctioned the attorney. The court also referred the matter to the State Bar.

Moreover, the court struck the attorney’s appearance in the case.  The attorney was ordered to advise the client of the sanctions and of the fact that, should the Plaintiff choose to refile her case, she must find new counsel.


Anyone wishing to review the court's decision in Bevins may click this LINK.  The Court's companion Order can be viewed HERE.

Third Circuit Addresses AI Hallucinations in Court Filing


In the case of McCarthy v. U.S. DEA, No. 24-2704 (3d Cir. July 21, 2025) (Op. by Chung, J.) (not precedential), a case involving issues arising from the Drug Enforcement Administration’s revocation of a Certificate of Registration held by the Plaintiff, a physician’s assistant, the Third Circuit Court of Appeals chastised the Plaintiff’s attorney for relying upon “summaries” of eight (8) previous DEA adjudications that the attorney secured through research on an artificial intelligence tool. 

The court confirmed that the Plaintiff’s counsel acknowledged that seven (7) of the summaries were inaccurate and the eighth did not exist. The attorney further acknowledged to the court that he “never took care to confirm the accuracy of the summaries or even that the decisions existed.”

The court confirmed that it would not consider this faulty portion of the Plaintiff’s attorney’s Brief.

In this decision, the court also noted that it was separately ordering Plaintiff’s counsel to show cause why he should not be sanctioned for his conduct “particularly for his lack of candor to the court.” See Op. at 7 n. 5.

Anyone wishing to review a copy of this decision may click this LINK.

Thursday, August 21, 2025

Article: AI and Its Proper Use in the Practice of Law

The below article, written by me and my son, Michael, entitled "AI and Its Proper Use in the Practice of Law," appeared in the August 14, 2025 edition of the Pennsylvania Law Weekly and is republished here with permission.   Michael is a computer science and philosophy major at Ursinus College with a focus on AI research.

Expert Opinion | Artificial Intelligence


AI and Its Proper Use in the Practice of Law


August 14, 2025, Pennsylvania Law Weekly

By Daniel E. Cummins and Michael Cummins


While many articles on AI and the law have shouted “AI is coming! AI is coming!” like Paul Revere galloping through the night, very few of those articles actually provide advice on how to incorporate AI into your law practice.

In this article, written by a practicing attorney and a budding computer scientist, we provide not only the basic terms of art relative to AI and its uses, but also nuts-and-bolts advice on how to begin properly utilizing AI as part of your practice.

Duty to be Competent With Technology

Under Pennsylvania Rule of Professional Conduct 1.1, lawyers are required to continue to work to maintain and improve their competency in the practice of law. Rule 1.1 states, in part, that the provision of competent legal representation to a client “requires the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation.”

This rule has been construed to require attorneys to remain competent with advancements in technology that can improve one’s ability to represent clients. As such, not only is it good for one’s practice to begin becoming proficient in the use of AI, but some may argue that doing so is required under the rules of ethics.

Common AI Terms to Know

Here are some common terms related to AI that attorneys should know and understand:

Algorithm—A process or set of instructions written by a computer programmer to be followed by a computer.

Artificial intelligence—The development of a series of algorithms that instruct computers to complete tasks which typically require human intelligence, reasoning, and understanding, including visual and audio perception, speech recognition, decision making, etc.

Machine learning/deep learning—Machine learning is a branch of AI which focuses on enabling computers to learn from data and improve their performance without explicit, ongoing programming. Deep learning is a form of machine learning where the computer utilizes multiple layers of processing in order to extract even more information from data.

Large language model (LLM)—A type of AI which utilizes deep learning to process and generate human language by recognizing patterns and associations. Such AI models are described as "large" due to the massive data sets they are trained on, containing billions of words and parameters. ChatGPT and Gemini are examples of large language models.

ChatGPT—An AI tool developed by OpenAI to engage in written conversations with humans to answer questions, complete tasks or follow prompts. You can try it for free at www.chat.com.

Gemini—An AI tool developed by Google very similar to ChatGPT that can generate responses to queries by pulling information from the internet and presenting it in a conversational manner to the reader.

Hallucinations—Hallucinations occur when an AI model produces a response that is factually incorrect or nonsensical, yet presents it as if it were accurate and grounded in the data the AI was trained on. This can occur due to datasets that are incomplete or poorly gathered and maintained. An example of a hallucination would be ChatGPT providing case citations that are inaccurate or even totally fabricated.

AI Is Just Guessing

While it is a common misperception to say that ChatGPT "knew" the answer or that Google AI Overview "understood" what you were searching for, that is actually far from the truth.

Artificial intelligence is essentially a prediction algorithm using an unimaginable number of parameters and associations to give the appearance of knowledge; in other words, AI platforms such as ChatGPT or Google’s Gemini give you their best guess at what you would want to hear based on your prompt and the data it was trained on.

On many occasions, an AI platform may produce a result that is factually correct because it has been trained on adequate data for the specific need. However, if it is asked a question or prompted on something that it has not been trained on, there is a good chance it may hallucinate and give a false response. Since AI is predictive in nature, it will only ever give you its "best" response possible within the limitations of the tool. Of course, AI can never give a "truthful" response in any deliberate sense, because truth is foreign to predictive AI.

A good analogy is to think of AI as a contestant on Final Jeopardy. The contestant (AI) is given a query or a prompt. The contestant (AI) then searches through the recesses of his or her mind and knowledge (the data it was trained on) in the hopes of coming up with the correct response. The contestant (AI) provides the response, not knowing if it is a correct response. The only difference between the contestant and AI is that the contestant may give up and admit they do not know the answer; AI will always generate its "best guess" even if it has been trained on none of the relevant information.
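The Final Jeopardy analogy can be made concrete in a few lines of code. The sketch below is a deliberately tiny, illustrative word-frequency "model" (an assumption for teaching purposes, nothing like a real LLM): it always emits its statistically "best guess" at the next word, and, unlike the contestant who can admit defeat, it still answers even when the prompt never appeared in its training data.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word tends to follow which,
# then always emit the single most likely next word -- a "best guess."
training_text = (
    "the court sanctioned the attorney because the attorney "
    "filed a brief and the brief contained fabricated citations"
)

words = training_text.split()
follows = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def best_guess(prompt_word):
    """Return the most frequent follower of prompt_word.

    Like a predictive AI, this never says "I don't know": if the
    prompt word was never seen in training, it falls back to the
    most common word overall -- a guess untethered from the prompt.
    """
    if prompt_word in follows:
        return follows[prompt_word].most_common(1)[0][0]
    return Counter(words).most_common(1)[0][0]

print(best_guess("the"))      # a prediction grounded in the training data
print(best_guess("verdict"))  # never seen in training, yet it still "answers"
```

The second call is the hallucination in miniature: the model has no relevant data, but it confidently produces output anyway.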

Nuts and Bolts of How to Use AI

For instructions on how to try out AI, we will use the most popular AI tool at the moment, ChatGPT.

ChatGPT is free to try out. There are more detailed uses of ChatGPT that you could pay to utilize but, at least for now, anyone can use the basic form of ChatGPT for free.

You can find ChatGPT at www.chat.com. When you go to the site, a box may pop up asking you to log in or sign up, but that is not necessary. You can click “Stay logged out” instead.

You can utilize ChatGPT to conduct a search like you would on Google, but be sure to verify and triple check the responses. You can also give ChatGPT an “assignment” such as the following examples:

Research the current status of the law on Limited Tort in Pennsylvania and include case citations;

Draft a Brief outlining the Hills and Ridges Doctrine in Pennsylvania and include case citations;

Draft Interrogatories applicable to a fire loss subrogation case; and

Provide deposition questions applicable to a dog bite case.

ChatGPT will search within its pretrained database for information and will respond with detailed information in response to the queries.

It is crucial to keep in mind that the most important practice to follow when using AI is to verify everything it generates. Whether you are asking it to give you a starting point for research, assist with discovery efforts, or draft documents, you should double check every aspect of its output for hallucinations, that is, for inaccurate, inapplicable or even false information.
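One modest way to make that verification habit routine is sketched below. This is a hypothetical helper (not an actual court or vendor tool): it uses a simplified regular expression to pull citation-like strings out of a draft so that each one can be checked by hand against Westlaw, Lexis, or the court's own docket before filing. The pattern is an illustrative assumption and will miss many citation formats.

```python
import re

# Matches strings that *look like* reporter citations, e.g. "123 F.3d 456"
# or "45 F. Supp. 2d 678". A simplification for illustration only.
CITATION_PATTERN = re.compile(
    r"\b\d+\s+"  # volume number
    r"(?:U\.S\.|S\. ?Ct\.|F\.(?:2d|3d|4th)?|F\. ?Supp\. ?(?:2d|3d)?|A\.(?:2d|3d)?)"
    r"\s+\d+\b"  # first-page number
)

def citations_to_verify(draft_text):
    """Return the unique citation-like strings found in draft_text."""
    return sorted(set(CITATION_PATTERN.findall(draft_text)))

draft = (
    "See Smith v. Jones, 123 F.3d 456 (3d Cir. 1997); "
    "accord Doe v. Roe, 45 F. Supp. 2d 678 (E.D. Pa. 1999)."
)
for cite in citations_to_verify(draft):
    print("VERIFY BY HAND:", cite)
```

A script like this only builds the checklist; the verification itself, pulling up each cited decision and reading it, must still be done by a human.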

How you prompt AI, or submit your queries, is also very important as it can assist in generating more accurate and beneficial responses. When writing prompts, giving the AI context as to your goals and relevant background information is very important. Clarity is also important, and, therefore, breaking down a singular prompt into multiple parts with clear instructions can also yield better results.

Ultimately, AI should be used more as a tool to assist with menial tasks than as a one-stop shop to replace human ingenuity. To paraphrase what one judge wrote in an Opinion involving an AI issue, the use of artificial intelligence also requires the use of human intelligence.

As noted in greater detail below, the reliance upon artificial intelligence to complete legal research without also verifying the veracity of the citations through other trusted resources is not only dumb, but can also land you in hot water.

Hallucinations All Around the World

Attorneys and judges from all around the world have begun to utilize AI to assist with their legal research and brief writing. As such, a few of those attorneys and judges have been getting in trouble for failing to check the validity and accuracy of the legal citations secured through the use of AI platforms prior to filing documents of record.

An attorney and computer data scientist located in Paris, France, by the name of Damien Charlotin has created a worldwide scorecard of sorts documenting cases from around the world in which attorneys have been sanctioned for filing briefs and other documents with a court that contain AI hallucinations, that is, case citations that are improper, invalid, or simply fabricated.

As noted in the compilation created by Charlotin, not only lawyers but also judges have been tripped up by the use of AI. In Georgia, it took an appeals court to reveal that not only an attorney in the lower court, but the lower court itself, had relied upon and cited bogus case citations secured via AI research. In that case, Shahid v. Esaam, 2025 Ga. App. LEXIS 299, at *3 (Ga. Ct. App. June 30, 2025), the Georgia Court of Appeals struck the lower court’s order, remanded the case, and sanctioned the attorney involved.

More recently, a New Jersey district court judge withdrew his decision in the biopharma securities case of In Re CorMedix Securities Litigation, 2:21-CV-14020 (D. N.J. July 22, 2025 Neals, J.), after the lawyers involved in the case complained that his opinion contained numerous errors, including made-up quotes, misstated case outcomes, and incorrect case citations, all presumably secured from research on an AI platform. The court withdrew its published decision and noted that another opinion and order would be issued.

With regard to attorneys running afoul of their obligations through the use of AI research in their filings, according to the above scorecard, as of July 25, 2025, there were at least 230 cases from around the world in which a court had determined that a filing contained AI-produced hallucinated content, typically fake citations. Of those 230 instances, 130 were found in the United States. Of those, at least four cases arose in the Commonwealth of Pennsylvania.

Attorneys From Pennsylvania Who Hallucinated

As noted, the filing of court documents containing hallucinations in the form of faulty or fake legal citations has led to sanctions in at least four Pennsylvania cases. The Pennsylvania federal courts that have addressed these issues have found that the submission of court filings with faulty citations amounts to violations of Fed.R.Civ.P. 11 (by signing a filing, an attorney certifies the accuracy of the legal arguments contained therein), as well as violations of Rules of Professional Conduct 1.1 (Competency) and 3.3 (Candor Toward a Tribunal).

In the nonprecedential decision by the U.S. Court of Appeals for the Third Circuit in the case of McCarthy v. U.S. DEA, No. 24-2704 (3d Cir. July 21, 2025 Chung, J.)(Not Precedential), the court addressed the DEA’s revocation of a physician’s assistant’s certificate of registration. The petitioner’s attorney was caught having submitted a filing that relied, in part, on “summaries” of eight previous DEA adjudications in support of arguments on behalf of the petitioner.

After it was determined that seven of the summaries were inaccurate and that the eighth decision did not even exist, the petitioner’s attorney acknowledged the same and admitted that the summaries had been secured through research on an AI tool. The court confirmed in its decision that the petitioner’s attorney confirmed that “he never took care to confirm the accuracy of the summaries or even that the decisions existed.” See McCarthy at p. 7. The court ruled that it would not consider the portion of the brief that contained the hallucinated information and issued a separate Order requiring the at fault attorney “to show cause why he should not be sanctioned for his conduct, particularly for his lack of candor to the court.”

In the separate case of Bunce v. Visual Technology Innovations, No. 2:23-CV-01740 (E.D. Pa. 2025 Kai, J.), a defense attorney admittedly utilized ChatGPT to draft his filings at issue relative to a discovery issue. The filings submitted by the defense counsel contained fake citations that could not be located on trusted resources.

The court in Bunce found violations of Fed.R.Civ.P. 11 and sanctioned counsel. While the court emphasized that nothing in Rule 11 prohibits the use of AI in the practice of law, Rule 11 makes clear that an attorney who signs a filing is responsible for verifying the accuracy of the legal and factual claims contained within it.

In the case of Jakes v. Youngblood, No. 2:24-cv-1608 (W.D. Pa. June 26, 2025 Stickman, J.), the court faulted an attorney for submitting briefs with wholly fabricated quotations from case law, including fabricated quotations from this court’s own prior opinion. The court also noted that, “even more outrageously,” a review of the attorney’s reply brief filed in the same case revealed that that brief also contained fabricated quotes and misrepresented case law.

The court noted that it found it to be “very troubling” that, when accused of serious ethical violations, the attorney at fault “chose to double down.” In the end, the court noted that it viewed the attorney’s “conduct as a clear ethical violation of the highest order.” In its opinion, the court noted that the attorney at fault had filed a withdrawal of appearance.

This Pennsylvania federal court cited Federal Rule of Civil Procedure 11 and Pennsylvania Rule of Professional Conduct 3.3 (Candor Toward Tribunal) as confirming that attorneys have legal and ethical duties owed to the court. The court noted that an attorney who signs and files a brief authored by a nonlawyer, such as a paralegal, an intern, or a clerk, is personally responsible for all that the filing contains. In the end, the Jakes court dismissed the filings presented by the at fault attorney and ordered that attorney to show cause as to why his filings should not be viewed as having violated Rule 11 and Pa. R.P.C. 3.3.

In another Pennsylvania case involving an attorney utilizing AI hallucinations in a court filing, the court issued sanctions. In Bevins v. Colgate-Palmolive, No. 25-576 (E.D. Pa. April 10, 2025 Baylson, J.), the attorney in trouble had provided the court with case citations that were inaccurate and did not lead to any identifiable court opinions. The court noted that, based upon its research, it could not locate a case relative to the two (2) citations at issue and could not detect a possible typographical error in the citations provided.

When the court ordered the attorney to provide an explanation, the attorney asserted that the inclusion of the incorrect citations was unintended given that he planned to replace the wrong citation with a proper one but failed to do so in his final draft. The court noted its concern as to why the attorney was silent as to his act of providing the court with case citations to decisions that did not even exist and, as such, the court noted that it was “unconvinced by counsel’s explanations.”

This court also referred to Rule 11 and sanctioned the attorney. The court additionally referred the matter to the state bar and struck the attorney’s appearance, thereby removing the attorney from the case. The court further ordered the attorney to advise the client of the sanctions and of the fact that, should the Plaintiff choose to refile her case, she must find new counsel.

The above court decisions confirm that the use of unverified AI legal research in court filings could lead to serious sanctions if hallucinated citations or quotes or summaries are utilized. As one court noted, confirming the validity of one’s legal research and case citations is one of the most basic requirements that has always been present in the practice of law. The decisions on this issue confirm that the courts will rightfully take a hard stance against attorneys who submit hallucinated content to the court. Such a hard stance is required to protect the integrity of the record and the court system as a whole.

Anticipated Rules of Court on the Use of AI

With the rise of the use of AI in the practice of law, the federal and state courts have begun to take steps to promulgate rules and parameters to monitor the same.

In innovative fashion, U.S. District Court Judge Karoline Mehalchick of the Middle District of Pennsylvania crafted a civil practice order on the use of generative artificial intelligence, which appears to be the first of its kind at least in Pennsylvania.

Under that order, issued in all of Mehalchick’s civil cases, attorneys who utilize AI in the drafting of any of their court filings are required to file a certification with the court that identifies what AI platform was utilized, delineates what portion of the filing contains AI generated content, and certifies to the court that the filing attorney checked the accuracy of the AI generated content, including all references to case citations and legal authority.

In her order, Mehalchick also directs that the parties review the joint formal opinion of the Pennsylvania Bar Association and the Philadelphia Bar Association on the “Ethical Issues Regarding the Use of Artificial Intelligence.”

On the state court level, the Pennsylvania Supreme Court created the advisory committee on artificial intelligence in the Pennsylvania courts in order to monitor the use of AI in the court system. One possible recommendation that may come out of the advisory committee might be for the promulgation of a statewide rule of civil procedure on the use of AI in the practice of law, particularly with regards to court filings.

As the future continues to arrive, it is anticipated that the attorneys and judges in Pennsylvania will continue to adapt and the practice of law, hopefully, will improve as a whole.

Daniel E. Cummins is the managing attorney at Cummins Law where he focuses his practice on motor vehicle and trucking liability cases, products liability matters, and premises liability cases. He also serves as a mediator for the Federal Middle District Court and for Cummins Mediation. He is additionally the sole creator and writer of the Tort Talk Blog at www.TortTalk.com. Michael Cummins, Daniel's son, is a computer science and philosophy major at Ursinus College with a focus on researching artificial intelligence.

Reprinted with permission from the July 24, 2025 edition of The Pennsylvania Law Weekly © 2025 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited; contact 877-256-2472 or asset-and-logo-licensing@alm.com.

Sunday, August 25, 2024

Judge Karoline Mehalchick of Middle District Federal Court Issues Novel Civil Practice Order on the Use of Generative Artificial Intelligence in Documents Filed With Court

In the case of E.J. v. Johnson, No. 3:23-CV-1636 (M.D. Pa. Aug. 19, 2024 Mehalchick, J.), Judge Karoline Mehalchick of the Federal Middle District Court of Pennsylvania has taken the innovative step of issuing a Civil Practice Order on Use of Generative Artificial Intelligence.

Under this Order, which appears to be the first of its kind in the Federal Middle District Court of Pennsylvania, and possibly even the first of its kind out of any state or federal court in Pennsylvania, Judge Mehalchick ordered that if a party to any litigation pending before her has utilized AI in the preparation of any filing, that party must include a Certificate of Use of Generative AI with the filing.

In that Certificate of Use of Generative AI, the party is required to disclose and certify the following:

(1) The specific AI tool utilized;

(2) The portions of the filing prepared by the AI program; and

(3) That a person has checked the accuracy of any portion of the document generated by AI, including all citations and legal authority.

In the Order, Judge Mehalchick cautioned that failure to comply with this Civil Practice Order could result in sanctions.

Judge Karoline Mehalchick
M.D. Pa.

Judge Mehalchick also directed that all parties and counsel review the conclusions on pgs. 15-16 of the Joint Formal Opinion of the Pennsylvania Bar Association and Philadelphia Bar Association regarding the use of Artificial Intelligence.  That Joint Opinion can be viewed at this LINK.

Judge Mehalchick otherwise concluded her Order by directing counsel to be mindful of their ethical and professional obligations before the Court in this regard.

Anyone wishing to review Judge Mehalchick's Civil Practice Order on the Use of Generative Artificial Intelligence may click this LINK.


I send thanks to Attorney Jerry Geiger of the Stroudsburg, PA law firm of Newman Williams, P.C. for bringing this novel Order to my attention.

In terms of any such Rule in the Pennsylvania state courts, the word is that the Pennsylvania Supreme Court plans to address AI issues on a statewide basis with uniform rules so as to avoid issues with different counties having different local rules on the matter.

Source of AI image: Photo by Solen Feyissa on www.pexels.com.

Thursday, July 13, 2023

A Beginner's Glossary of AI and ChatGPT Terms by Judge Richard B. Klein (Ret.)


AI and ChatGPT have been taking over headlines around the world, including within the legal field.  One article goes so far as to call this new age part of "the Fourth Industrial Revolution."

The Rules of Professional Conduct regarding competence (Rule 1.1) note that lawyers have a duty to keep up with the changing ways.

To assist in that regard, here is a LINK to an excellent article by Superior Court Judge Richard B. Klein (Ret.) entitled "A Beginner's Glossary of AI and ChatGPT Terms:  Fifteen Key Terms For Lawyers" in which Judge Klein provides plain English definitions for common terms in this new field.

This article is republished here with the permission of Judge Richard B. Klein.  I send thanks to Judge Klein for allowing the republication of the article here for the benefit of the readers of the Tort Talk Blog.