Here’s what we’re committing to:
1. Human creativity comes first
AI will augment – not replace – human creativity. We will use it to accelerate research, ideation, drafting, production support, and experimentation, while keeping strategic thinking, editorial judgment, design direction, and final client accountability in human hands.
2. People remain accountable
AI can assist, but it cannot own responsibility. Big Top people remain accountable for the quality, legality, fairness, and consequences of every decision and deliverable. We will not permit fully autonomous client-facing decisions or fully autonomous publishing.
3. Client trust above speed
Efficiency matters, but trust matters more. We will always choose accuracy, care, and honesty over faster output. We will disclose AI involvement to clients and label or watermark synthetic media where relevant to the medium, the audience, and the context.
4. Truth, dignity, and consent are non-negotiable
We will not create or distribute deceptive or undisclosed synthetic media. We will not impersonate people, clone voices without explicit permission, or use AI in ways that distort reality, remove consent, or undermine dignity.
5. Privacy, confidentiality, and data protection matter
We will protect personal data, user data, confidential information, and client materials with the highest care. Only public data may be used in public AI tools. We will comply with GDPR and other applicable Irish and EU obligations, and we will never trade privacy for convenience.
6. Originality and intellectual property deserve respect
We will use AI in ways that support original thinking, not shortcut it. We will verify originality before client delivery, avoid tools or datasets with questionable provenance unless approved, and avoid prompts that imitate living artists’ recognisable styles without rights or permission. We will protect both client IP and Big Top IP in prompts, workflows, and outputs.
7. Fairness, inclusion, and accessibility are part of quality
We will actively test for bias, harmful stereotypes, exclusion, and accessibility failures. Our goal is not only efficient AI, but better AI – work that respects people across backgrounds, abilities, identities, and cultures.
8. Responsible leadership requires continuous learning
Big Top has been incorporating AI into its workflows since 2022. We believe leadership means combining experimentation with discipline. We will train our teams, maintain approved tools, review our practices, report issues, and improve continuously as technology, regulation, and client expectations evolve.
What this means in practice
- Every AI-assisted client deliverable will be reviewed by a human before release.
- Sensitive, synthetic-media-heavy, or otherwise novel uses require manager sign-off.
- Only public data may be entered into public AI tools, and no confidential data may be entered into unmanaged tools.
- Clients will be told when AI has materially contributed to a deliverable.
- Misuse of AI may trigger corrective action under company policy.
At Big Top Multimedia, we believe the future of AI should be human-led, creatively ambitious, ethically grounded, and worthy of trust. This manifesto is our commitment to build that future with care.