News

  • Getty Images vs. Stability AI: Where Are the Limits of Exceptions?

Getty Images is pursuing legal action against Stability AI over the alleged unauthorized use of millions of photographs to train its AI model. Stability AI defends itself by invoking the exceptions for caricature, parody, and pastiche.

A key complication is that UK law currently contains no text and data mining (TDM) exception that would cover commercial AI training. The court will therefore decide without a clearly defined legal framework, making this a precedent-setting case with potential implications across the European market.

    The judgment is expected in summer 2025. It remains to be seen whether the UK will prioritize the protection of content investments or the unrestricted development of AI.

  • NO FAKES Act: U.S. Proposes Protection of Voice and Likeness Against AI Misuse

    The U.S. Copyright Office has released a report recommending legislative measures to protect individuals’ voices and likenesses from misuse through artificial intelligence. This responds to growing concerns over so-called digital replicas and deepfakes.

The proposed NO FAKES Act would establish a federal intellectual property right in a person’s voice and likeness. Its goal is to strengthen individual control over how these personal traits are used in the digital space, including by generative AI systems. If enacted, the law would set an important precedent for the protection of personal rights in the age of artificial intelligence.

  • AI and Data Mining: Can This Process Be Compared to Human Learning?

    One of the common arguments made by technology companies is that training an AI system mirrors the way humans learn. However, the difference is fundamental.

    While people take away subjective impressions from what they read or hear, AI models create precise digital copies of entire works. These copies are then analyzed and reproduced — often without the knowledge or consent of the original authors. Moreover, AI does not undergo any inner experience or cultural filtering; it lacks consciousness and personal context. Its ability to absorb and process enormous volumes of data in a short time has no true equivalent in human learning.

    This has significant implications for copyright law, as the use of protected works without a license — disguised as “learning” — can easily amount to an infringement.

  • Court Case over AI Hallucinations: The U.S. Protects Developers, Not Users

    A U.S. court has dismissed a lawsuit against OpenAI in which a radio host claimed that ChatGPT falsely stated he had embezzled money. The reason? OpenAI’s terms of service clearly warn that AI outputs may contain errors and should be treated with caution.

The ruling implies that if an AI provider warns users about possible hallucinations, it is not liable for damages caused by false or misleading outputs. The situation in Europe, however, may soon differ: the EU’s AI Act introduces stricter rules and greater accountability for AI operators.
