AI is embedded in our daily workflows, but many of us are afraid to confront how reliant on it we’ve become.
This anxiety is understandable. Sixty-six percent of consumers are experiencing AI credibility fatigue, and 43 percent say they don’t trust much of anything anymore.
But staying silent won’t cut it. PR and communications professionals need an AI disclosure strategy to protect and strengthen their credibility. Addressing digital ethics — principles, guidelines, and values governing the use of technology — will make you stand out as a reliable and trustworthy partner to clients and journalists.
Here’s a practical framework to help you navigate PR and ethics and build trust in a world of AI-everything.
The Cost of Silence
Many PR and communications professionals are hesitant to publicly admit to using AI. Case in point: Only 20 percent of agency PR professionals disclose their use of it. Many worry that their work will be viewed as lower quality, that audiences will question their expertise, or that clients will struggle to justify the cost of their services.
They’re all valid concerns, but buyer doubt already exists. A recent PANBlast study found that 62 percent of people say they’ll be more skeptical when validating information online.
To build trust and avoid skepticism, lead with transparency and authenticity. Saying nothing sends a message: that you have something to hide. The Public Relations Society of America (PRSA) emphasizes the importance of digital ethics, reporting that masking AI authorship erodes trust, and recommends being fully forthcoming.
Proactively communicating your use of AI for PR signals confidence and accountability.
The AI for PR Disclosure Framework: What, How, and When
AI disclosure isn’t black and white: Some situations require more transparency than others. Ultimately, it comes down to whether AI usage could affect trust or impact audience expectations.
These are general AI for PR guidelines. However, each situation is nuanced, and our understanding and use of the technology are constantly evolving. As a result, it's necessary to establish and regularly revisit your AI disclosure practices.
What to disclose: Advice on communicating your use of AI for PR
Use this chart to guide your AI disclosure decisions.
| Duty to Disclose | Description | Be explicit when using AI to: | Recommendations |
| --- | --- | --- | --- |
| High obligation | In any public-facing content where AI has made a meaningful contribution, or clients or journalists expect human authorship, it’s necessary to disclose your use of the technology. | Stand in for a person’s voice, image, or likeness. Generate quotes for executives or subject matter experts. Build campaign plans. Influence product endorsements. Create videos, voiceovers, and social posts. | Clearly communicate AI usage terms in client contracts and briefs. Explicitly sharing how you use the technology sets the tone from the beginning. |
| Medium obligation | You should generally disclose AI usage when AI contributes heavily to the final product. Some cases fall into a grey area, but when in doubt, more transparency is always better than less. | Source supporting data. Conduct research or gather background information. | Regularly engage with stakeholders to set and implement AI usage communication guidelines. Document these policies and revisit them on an ongoing basis. |
| Low obligation | You usually do not need to disclose AI usage when using it for ideation or brainstorming. | Workshop or refine an existing idea. Make spelling or grammar edits to emails, blogs, and reports. Note: Both of the above apply to internal conversations, rather than external ones. | While formal disclosure is not always necessary externally, internal documentation is a good idea. Work with your manager, peers, and direct reports to identify when AI usage needs to be flagged internally. |
How and when to communicate AI usage with stakeholders
PR and communications professionals should disclose AI usage to:
- Clients in contracts and deliverables
- Journalists, when AI has shaped quote materials or key sources
- Audiences, when AI is used and human-authored content is expected
- Internal stakeholders to keep them in the loop on how you’re using AI in your day-to-day work
This information should be presented when materials are delivered, not after the fact. This applies to both internal and external audiences.
Regardless of the situation, PRSA recommends clearly disclosing AI usage with statements like:
- “This content was generated with the use of AI.”
- “AI generated 80% of this content; it was 100% reviewed by a human.”
Adapt these statements to fit the needs of your organization. What’s most important is leading with authenticity.
AI Regulatory Requirements: What You Need To Know
Regulatory requirements for AI usage are still developing, but there has been a push for increased transparency in recent years. While there are no federal digital ethics mandates in place, many states, like California and Utah, are enacting AI usage laws, and the Federal Trade Commission (FTC) has begun enforcing initiatives like Operation AI Comply.
AI disclosure guidelines are constantly changing, so consider where your business operates and frequently check for the latest regulations.
State and local laws
AI legislation varies greatly from state to state, but there are several laws already in effect, such as:
- The California AI Transparency Act went into effect in January 2026. It requires companies producing generative AI systems with over one million monthly visitors to help California users identify AI-generated content.
- The Utah Artificial Intelligence Policy Act requires organizations to inform consumers when they’re interacting with AI-generated content.
- The Colorado AI Act requires developers to inform consumers when they’re interacting with AI systems.
These laws don’t always explicitly target PR professionals, but many apply to organizations that employ PR and communications leaders. PR leaders who get ahead of these laws now will be in a stronger position later.
Federal initiatives
At the federal level, the FTC has enacted multiple initiatives to protect against unlawful use of AI.
For example, Operation AI Comply cracks down on businesses making deceptive marketing claims regarding the use of AI. DoNotPay, Inc., Evolv Technologies Holdings, Inc., Rytr LLC, and IntelliVision Technologies Corp. have all been involved in FTC cases for false claims and promises.
Beyond Operation AI Comply, the FTC requires disclosure of AI usage in endorsement campaigns and prohibits the creation or sale of AI-generated fake reviews.
Outside of the FTC, no official federal AI laws exist in the U.S. However, executive orders have been issued and AI discussions are ongoing, so we expect the regulatory landscape to continue changing.
International regulations
If you, like us, work with clients outside the U.S., it's necessary to review and understand the EU's AI Act and General Data Protection Regulation (GDPR) to ensure regulatory compliance.
The EU's AI Act includes a set of rules meant to foster trustworthy AI practices, including transparency obligations for certain AI systems. GDPR outlines privacy regulations in the EU, and its data-protection requirements extend to the use of personal data in training AI models.
Use AI Responsibly to Boost Your Credibility
AI disclosure isn’t a suggestion; it’s a necessity.
PR and communications professionals who stay silent on AI usage risk losing trust — and potentially face legal repercussions.
Now is the time to develop your digital ethics framework, and PAN is here to help.
We believe in impact over output, and using the right mix of technology, PR expertise, and data-driven storytelling to help clients build credibility. At PAN, we practice what we preach and disclose how we bring AI and human intelligence together to craft great stories that drive awareness and build pipeline.
Want to learn more? Let’s talk.
