Data and Content From the Perspective of Responsible AI Use

Florian Spengler

The rapid development of artificial intelligence presents companies with enormous opportunities and challenges. While AI technologies drive innovation, ethical and regulatory issues arise around data protection, copyright and responsibility. Here we’ll explain from a strategic perspective how companies can exploit the full potential of AI while ensuring that their data and content remain protected.

AI development: Speed and scale

The innovation cycles in the field of AI are shortening drastically. Where earlier technologies such as the internet and social networks took years to reach a mass audience, AI is in an exponential growth phase. One example of this is the rapid spread of ChatGPT, which reached 100 million users in just two months. The increasing integration of AI into day-to-day business is particularly evident in software development, where 63% of developers already use AI-supported tools.

These developments rest on three central pillars: technological innovations (neural networks and transformer architectures), data availability and immense computing capacity. Companies like OpenAI collect not only publicly available data but increasingly also usage and company data in order to optimize their models further.

[Figure: Innovation Cycles]

Responsible AI as a strategic framework

The rapid growth of AI technology is also creating ethical challenges. Data protection, transparency and fairness must be maintained to ensure long-term trust. This is precisely where the concept of “Responsible AI” comes in: It defines guidelines for the ethical and responsible use of AI.

The central principles are:

  • Data protection
    The protection of personal data must be guaranteed.
  • Security
    Protection against misuse and manipulation is essential.
  • Inclusion
    AI models must be designed for diversity to avoid bias and discrimination.
  • Transparency
    Decisions made by AI systems should be traceable.

Although many companies formulate corresponding guiding principles, they struggle to implement them consistently in their development processes and in day-to-day work with AI. Open letters from AI researchers pointing out risks such as data protection violations, bias and problematic copyright practices underscore the urgency of the issue.

“Responsible AI means balancing innovation and ethics. Only in this way can it create sustainable added value.”
Florian Spengler

Regulatory challenges: GDPR and copyright

The European General Data Protection Regulation (GDPR) sets high standards for the protection of personal data. The EU AI Act, which came into force in 2024, classifies AI systems according to risk levels and defines strict requirements for high-risk applications. While providers such as Meta train their AI models on publicly available data, the problem remains of how to remove data from a model once it has been integrated into training.
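
To make the idea of risk-based classification concrete, the following Python sketch models the four risk tiers commonly attributed to the EU AI Act (unacceptable, high, limited, minimal) as a simple lookup. The example systems, the mapping and the helper function are illustrative assumptions, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified EU AI Act risk tiers (illustrative, not legal advice)."""
    UNACCEPTABLE = "prohibited"           # e.g. social scoring by authorities
    HIGH = "strict requirements"          # e.g. AI in hiring or credit scoring
    LIMITED = "transparency obligations"  # e.g. chatbots must disclose they are AI
    MINIMAL = "no specific obligations"   # e.g. spam filters, AI in games

# Hypothetical mapping of example systems to tiers.
EXAMPLE_SYSTEMS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def describe(system: str) -> str:
    """Return the illustrative obligation level for a named system."""
    tier = EXAMPLE_SYSTEMS[system]
    return f"{system}: {tier.name} risk -> {tier.value}"

if __name__ == "__main__":
    for name in EXAMPLE_SYSTEMS:
        print(describe(name))
```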

The use of copyrighted content in AI training is particularly controversial. Providers invoke “fair use” and the public benefit, while rights holders insist on explicit permission. One possible solution lies in licensing models that allow companies to use content legally and ensure that creators are fairly compensated.

[Figure: EU AI Act - Classification]

Hallucinations and misinformation as a risk

A key problem of generative AI is so-called “hallucinations”: plausible-sounding but false or invented information. Because AI models do not understand content but merely recognize statistical patterns, incorrect or misleading results occur time and again. This applies not only to text, but increasingly also to images, audio and video, which can make misinformation harder to recognize.

For example, AI-generated fake news or manipulated images can spread easily on social media. Even scientific publications or legal documents can be affected by inaccurate AI output. Companies should therefore establish quality assurance mechanisms such as human review, a preference for open-source AI models and disclosure of the training data used.
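
As a minimal sketch of such a quality assurance gate, the following Python snippet routes AI-generated drafts to a human reviewer whenever the model's confidence is low or no sources are cited. The draft structure, confidence threshold and publishing rule are hypothetical assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    """An AI-generated draft awaiting quality assurance (illustrative)."""
    text: str
    model_confidence: float          # assumed score in [0, 1] from the pipeline
    sources: list = field(default_factory=list)

CONFIDENCE_THRESHOLD = 0.9           # hypothetical cutoff; tune per use case

def needs_human_review(draft: Draft) -> bool:
    """Flag drafts that are low-confidence or cite no verifiable sources."""
    return draft.model_confidence < CONFIDENCE_THRESHOLD or not draft.sources

def publish(draft: Draft, approved_by_human: bool = False) -> str:
    """Publish automatically only when the draft passes the review gate."""
    if needs_human_review(draft) and not approved_by_human:
        return "queued for human review"
    return "published"

if __name__ == "__main__":
    risky = Draft("AI-written claim without citations", model_confidence=0.55)
    safe = Draft("Summary with references", model_confidence=0.97,
                 sources=["internal style guide"])
    print(publish(risky))   # -> queued for human review
    print(publish(safe))    # -> published
```

The point of such a gate is to combine automation with oversight: routine, well-sourced output flows through, while anything uncertain is escalated to a human.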

Key takeaways:

  • Responsible AI as a framework
    The Responsible AI framework serves as a guide for ethical AI development, built on principles such as data protection, transparency, inclusion and accountability. AI must be guided by human values and adhere to regulations such as the GDPR and the EU AI Act to protect the rights of individuals.
  • Balance between innovation and regulation
    The growth of AI offers exciting possibilities, but also requires careful navigation of the regulatory landscape. The GDPR sets a high standard for data protection, and AI systems must meet it.
  • Copyright and data use
    The ideal approach to using copyrighted data in AI training is to strike a balance between fair use and formal license agreements. By working with content creators, AI developers can innovate while respecting intellectual property rights, creating a more ethical and sustainable ecosystem.

Conclusion: Balancing innovation and regulation

The use of AI offers companies enormous opportunities, but it also requires a responsible approach. Adherence to ethical principles and regulatory requirements is not only a legal necessity, but also a crucial factor for trust and market success. By taking a strategic approach that combines innovation with responsibility, companies can exploit the full potential of AI while keeping the associated risks under control.
