Hey friends. We talk a lot on this blog about the fun side of AI art: new models, better hands, wild upscaling tricks. But there is a story running in the background that every working illustrator should be paying attention to, and it is not about whether Flux beats Midjourney this month. It is about whether artists get to keep any legal control over their life's work in the age of scraped training data.
The lead face of that fight is a concept artist named Karla Ortiz, and her case is still very much alive in April 2026.
Who Is Karla Ortiz, Actually?
If you play Magic: The Gathering or you have seen a Marvel Studios concept board, you have already seen her work. Karla Ortiz is a San Francisco-based illustrator known for character and concept art on projects like Doctor Strange, Black Panther, Guardians of the Galaxy Vol. 2, Loki, and dozens of Magic: The Gathering cards. She is not a hobbyist with a grudge. She is a senior working professional in a field where your portfolio is literally your career.
In July 2023, Ortiz testified in front of the U.S. Senate Judiciary Subcommittee on Intellectual Property about generative AI and the art industry. Her core argument was simple and it has not changed. Companies trained their image models on billions of images scraped from the web, including copyrighted illustrations, without permission, without credit, and without a dime of compensation. Then they turned around and sold those models as products that can imitate the exact artists they were trained on.
That testimony is a great primer if you want to hear the artist side of this argument in plain English, not lawyer speak. She was not asking for AI to be banned. She was asking for consent, credit, and compensation, the three C's that have basically become the rallying cry for working illustrators worried about their industry.
The Lawsuit: Andersen v. Stability AI
Back in January 2023, Ortiz, cartoonist Sarah Andersen, and illustrator Kelly McKernan filed a class action copyright infringement lawsuit in the Northern District of California. The defendants are a who's who of image AI: Stability AI, Midjourney, DeviantArt, and, later, Runway. The case is formally known as Andersen v. Stability AI Ltd.
The core claim is that the LAION dataset, a collection of roughly 5 billion image and caption pairs used to train Stable Diffusion, was stuffed with copyrighted artwork that was never licensed. The artists argue that the models themselves are derivative works, built on top of their illustrations without permission, and that every generation that imitates their style is a downstream infringement.
The defendants have argued, as you would expect, that training on publicly available images is transformative fair use and that the models do not actually store copies of any specific image. That disagreement is basically the whole ballgame in AI copyright law right now.
Where the Case Actually Stands in 2026
Here is the part that a lot of coverage gets fuzzy on. This case has not been thrown out. It has also not produced a final ruling on the merits. It is grinding through the federal courts the way big class actions always do, slowly and painfully.
The key moment came on August 12, 2024, when U.S. District Judge William Orrick ruled that the artists could continue pursuing their core copyright claims. Judge Orrick wrote that the artists had reasonably argued Stable Diffusion may have been built "to a significant extent on copyrighted works" and was "created to facilitate that infringement by design." That language matters. A federal judge signaled, in writing, that the argument is serious.
There was a tradeoff. Judge Orrick dismissed several claims with prejudice, including some of Ortiz's specific copyright allegations because she had not registered her copyrights before filing the lawsuit, which is a procedural requirement under U.S. copyright law. Her lawyers clarified in oral argument that they were not pursuing those particular claims on her behalf. She is still a named plaintiff and a central voice in the case, but the technical copyright claim for those unregistered works is gone.
Since then, the case has moved into discovery. That is the phase where the plaintiffs get to subpoena the AI companies for internal documents, training data logs, and engineering records. Discovery is boring on the outside and explosive on the inside. Every email, every Slack message, every dataset commit gets turned over. This is where a lot of AI companies get nervous.
Why This Case Matters For Every Illustrator, Not Just the Plaintiffs
Even if you have never heard of Karla Ortiz, this lawsuit directly affects how the next decade of illustration is going to work. Here is the plain English version.
If the artists win, or even if they force a meaningful settlement, the precedent will be that AI companies cannot just scrape copyrighted art off the web, train a commercial product on it, and walk away. That could force real licensing deals, opt-out registries, and some form of payment to the artists whose work fed the machine. It would not kill AI art. It would change the economics of it, which is actually what most working artists are asking for.
If the AI companies win on fair use, the precedent goes the other direction. Training on anything publicly visible on the internet becomes legally safe, and the legal pressure on tools like Stable Diffusion and Midjourney basically evaporates. Working illustrators would be left with only self-help strategies, like the style-cloaking tool Glaze and the data-poisoning tool Nightshade, to protect their work.
So no, this is not just a courtroom drama for legal nerds. It is the case that is going to shape whether "I trained on your portfolio" is a business model or a legal problem.
What Karla Ortiz Is Actually Asking For
It is worth zooming in on what Ortiz has said publicly, because the framing in a lot of AI coverage is just wrong. She is not arguing that AI image generation should disappear. In her Senate testimony and in interviews since, her core ask has been consistent. She wants opt-in training, meaning artists have to give permission before their work gets used. She wants meaningful compensation when commercial AI products profit from that work. And she wants attribution, so that people know which artists a model is actually drawing on.
She has also been clear that the big AI companies could absolutely do this if they wanted to. Adobe Firefly, for example, was trained on licensed stock and public domain content. It exists. It works. It is not a fantasy that training data can be sourced ethically. It is a choice that companies like Stability and Midjourney decided not to make in 2022 and 2023, and now they are living with the legal consequences.
The Bigger Picture For Our Community
At Real AI Girls we are fans of AI art. That is not a secret. But being a fan of the tools does not mean ignoring the humans whose labor trained them. The illustrators fighting these cases are not anti-technology. Most of them use digital tools every day. They are asking for the same thing any other creative industry asks for when a new technology shows up, which is a fair seat at the table.
If you are a working illustrator reading this, keep an eye on the Andersen case as it moves through 2026. If you are an AI art creator, maybe follow artists like Karla Ortiz, Sarah Andersen, and Kelly McKernan on their actual portfolio sites and pay attention to how this plays out. And if you are somewhere in the middle, which honestly is most of us, it is totally okay to love Flux and Midjourney and also want artists to get paid for the work that trained them.
This is the conversation that is actually going to define AI art over the next five years. The model drops are fun. The court rulings are what will shape the industry the day after the next model drops.
We will keep tracking this case as it moves, and we will write it up again whenever there is a real update. Until then, hug your favorite illustrator and maybe commission something from a human.
Stay curious, stay creative, and stay kind.