Ethics & Responsible Use
Ethical considerations in AI
The prompts you write shape how AI behaves. A well-crafted prompt can educate, assist, and empower. A careless one can deceive, discriminate, or cause harm. As prompt engineers, we're not just users—we're designers of AI behavior, and that comes with real responsibility.
This chapter isn't about rules imposed from above. It's about understanding the impact of our choices and building habits that lead to AI use we can be proud of.
AI amplifies whatever it's given. A biased prompt produces biased outputs at scale. A deceptive prompt enables deception at scale. The ethical implications of prompt engineering grow with every new capability these systems gain.
Ethical Foundations
Every decision in prompt engineering connects to a few core principles:
- Don't use AI to deceive people or create misleading content: no fake reviews, impersonation, or manufactured "evidence"
- Actively work to avoid perpetuating biases and stereotypes: test prompts across demographics and request diverse perspectives
- Be clear about AI involvement when it matters: disclose AI assistance in published work and professional contexts
- Protect personal information in prompts and outputs: anonymize data, avoid including PII, and understand data policies
- Design prompts that prevent harmful outputs: build in guardrails, test for edge cases, and handle refusals gracefully
- Take responsibility for what your prompts produce: review outputs, fix issues, and maintain human oversight
The Prompt Engineer's Role
You have more influence than you might realize:
- What AI produces: Your prompts determine the content, tone, and quality of outputs
- How AI interacts: Your system prompts shape personality, boundaries, and user experience
- What safeguards exist: Your design choices determine what the AI will and won't do
- How mistakes are handled: Your error handling determines whether failures are graceful or harmful
Avoiding Harmful Outputs
The most fundamental ethical obligation is preventing your prompts from causing harm.
Categories of Harmful Content
- Instructions that could lead to physical harm: weapons creation, self-harm, violence against others
- Content that facilitates breaking laws: fraud schemes, hacking instructions, drug synthesis
- Content targeting individuals or groups: discriminatory content, doxxing, targeted harassment
- Deliberately false or misleading content: fake news, health misinformation, conspiracy content
- Exposing or exploiting personal information: revealing private data, stalking assistance
- Content that exploits vulnerable individuals: CSAM, non-consensual intimate content, scams targeting the elderly
CSAM stands for Child Sexual Abuse Material. Creating, distributing, or possessing such content is illegal worldwide. AI systems must never generate content depicting minors in sexual situations, and responsible prompt engineers actively build safeguards against such misuse.
Building Safety Into Prompts
When building AI systems, include explicit safety guidelines:
A template for building safety guidelines into your AI systems.
You are a helpful assistant for ${purpose}.
## SAFETY GUIDELINES
**Content Restrictions**:
- Never provide instructions that could cause physical harm
- Decline requests for illegal information or activities
- Don't generate discriminatory or hateful content
- Don't create deliberately misleading information
**When You Must Decline**:
- Acknowledge you understood the request
- Briefly explain why you can't help with this specific thing
- Offer constructive alternatives when possible
- Be respectful—don't lecture or be preachy
**When Uncertain**:
- Ask clarifying questions about intent
- Err on the side of caution
- Suggest the user consult appropriate professionals
Now, please help the user with: ${userRequest}
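If you assemble this template in code (the `${purpose}` and `${userRequest}` placeholders suggest a JavaScript/TypeScript template literal), keeping the safety block in one shared constant prevents it from being silently dropped when prompts get customized. A minimal TypeScript sketch; `buildSafeSystemPrompt` and `SAFETY_GUIDELINES` are illustrative names, not an API from any particular library:

```ts
// The safety block lives in one constant so every prompt variant includes it.
// Abridged here; paste in the full guideline text from the template above.
const SAFETY_GUIDELINES = `
## SAFETY GUIDELINES

**Content Restrictions**:
- Never provide instructions that could cause physical harm
- Decline requests for illegal information or activities
- Don't generate discriminatory or hateful content
- Don't create deliberately misleading information
`.trim();

function buildSafeSystemPrompt(purpose: string, userRequest: string): string {
  return [
    `You are a helpful assistant for ${purpose}.`,
    SAFETY_GUIDELINES,
    `Now, please help the user with: ${userRequest}`,
  ].join("\n\n");
}

// Usage: the safety section is present no matter what the purpose is.
console.log(buildSafeSystemPrompt("cooking questions", "How do I sharpen a knife?"));
```

The Intent vs. Impact Framework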
Not every sensitive request is malicious. Use this framework for ambiguous cases:
Work through ambiguous requests to determine the appropriate response.
I received this request that might be sensitive:
"${sensitiveRequest}"
Help me think through whether and how to respond:
**1. Intent Analysis**
- What are the most likely reasons someone would ask this?
- Could this be legitimate? (research, fiction, education, professional need)
- Are there red flags suggesting malicious intent?
**2. Impact Assessment**
- What's the worst case if this information is misused?
- How accessible is this information elsewhere?
- Does providing it meaningfully increase risk?
**3. Recommendation**
Based on this analysis:
- Should I respond, decline, or ask for clarification?
- If responding, what safeguards should I include?
- If declining, how should I phrase it helpfully?
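This framework can also run as an automated pre-check before a sensitive request reaches your main prompt. A hedged sketch of that idea: `callModel` is a placeholder for whatever LLM client you actually use, and the JSON answer shape is an instruction to the model, not something it is guaranteed to honor, so validate the parse in real code:

```ts
type TriageDecision = "respond" | "decline" | "clarify";

interface TriageResult {
  decision: TriageDecision;
  reasoning: string;
}

// Placeholder for your actual LLM client call.
declare function callModel(prompt: string): Promise<string>;

// Asks the model to apply the intent/impact framework and return a decision.
async function triageRequest(sensitiveRequest: string): Promise<TriageResult> {
  const prompt = `I received this request that might be sensitive:
"${sensitiveRequest}"

Work through intent analysis, impact assessment, and a recommendation,
then answer ONLY with JSON in this shape:
{"decision": "respond" | "decline" | "clarify", "reasoning": "<one sentence>"}`;

  const raw = await callModel(prompt);
  return JSON.parse(raw) as TriageResult; // Models can emit malformed JSON; validate before trusting.
}
```

Addressing Bias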
AI models inherit biases from their training data—historical inequities, representation gaps, cultural assumptions, and linguistic patterns. As prompt engineers, we can either amplify these biases or actively counteract them.
How Bias Manifests
- The model assumes certain demographics for roles: doctors defaulting to male, nurses to female
- Descriptions reinforce cultural stereotypes: associating certain ethnicities with specific traits
- Some groups are underrepresented or misrepresented: limited accurate information about minority cultures
- Perspectives skew toward Western culture and values: assuming Western norms are universal
Testing for Bias
Use this to test your prompts for potential bias issues.
I want to test this prompt for bias:
"${promptToTest}"
Run these bias checks:
**1. Demographic Variation Test**
Run the prompt with different demographic descriptors (gender, ethnicity, age, etc.) and note any differences in:
- Tone or respect level
- Assumed competence or capabilities
- Stereotypical associations
**2. Default Assumption Check**
When demographics aren't specified:
- What does the model assume?
- Are these assumptions problematic?
**3. Representation Analysis**
- Are different groups represented fairly?
- Are any groups missing or marginalized?
**4. Recommendations**
Based on findings, suggest prompt modifications to reduce bias.
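The demographic variation test is easy to automate: run the same prompt template across descriptors and collect the outputs for side-by-side review. A sketch using the same placeholder `callModel` client as above; judging whether the differences are problematic remains a human task:

```ts
// Placeholder for your actual LLM client call.
declare function callModel(prompt: string): Promise<string>;

// Descriptors to substitute into the prompt under test; extend as needed.
const DESCRIPTORS = ["", "female ", "male ", "young ", "elderly "];

// Runs one prompt template across demographic variations so outputs can be
// compared for tone, assumed competence, and stereotypical associations.
async function demographicVariationTest(
  template: (descriptor: string) => string,
): Promise<Record<string, string>> {
  const results: Record<string, string> = {};
  for (const d of DESCRIPTORS) {
    results[d.trim() || "(unspecified)"] = await callModel(template(d));
  }
  return results;
}

// Usage: does the described workday change with the descriptor?
demographicVariationTest((d) => `Describe a ${d}CEO's typical workday.`)
  .then((results) => console.table(results));
```

Mitigating Bias in Practice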
Bias-prone prompt
Describe a typical CEO.
Bias-aware prompt
Describe a CEO. Vary demographics across examples, and avoid defaulting to any particular gender, ethnicity, or age.
Transparency and Disclosure
When should you tell people AI was involved? The answer depends on context—but the trend is toward more disclosure, not less.
When Disclosure Matters
- Articles, posts, or content shared publicly: blog posts, social media, marketing materials
- When AI outputs affect people's lives: hiring recommendations, medical info, legal guidance
- Where authenticity is expected or valued: personal correspondence, testimonials, reviews
- Workplace or academic environments: reports, research, client deliverables
How to Disclose Appropriately
Hidden AI involvement
Here's my analysis of the market trends...
Transparent disclosure
I used AI tools to help analyze the data and draft this report. All conclusions have been verified and edited by me.
Common disclosure phrases that work well (a small helper after this list shows how to append them automatically):
- "Written with AI assistance"
- "AI-generated first draft, human edited"
- "Analysis performed using AI tools"
- "Created with AI, reviewed and approved by [name]"
Privacy Considerations
Every prompt you send contains data. Understanding where that data goes—and what shouldn't be in it—is essential.
What Never Belongs in Prompts
- Names, addresses, phone numbers, SSNs: use [CUSTOMER] instead of "John Smith"
- Account numbers, credit cards, income details: describe the pattern, not the actual numbers
- Medical records, diagnoses, prescriptions: ask about conditions generally, not about specific patients
- Passwords, API keys, tokens, secrets: never paste credentials; use placeholders
- Personal emails, messages, confidential docs: summarize the situation without quoting private text
Safe Data Handling Pattern
Unsafe: Contains PII
Summarize this complaint from John Smith at 123 Main St, Anytown about order #12345: 'I ordered on March 15 and still haven't received...'
Safe: Anonymized
Summarize this customer complaint pattern: A customer ordered 3 weeks ago, hasn't received their order, and has contacted support twice without resolution.
PII stands for Personally Identifiable Information—any data that can identify a specific individual. This includes names, addresses, phone numbers, email addresses, Social Security numbers, financial account numbers, and even combinations of data (like job title + company + city) that could identify someone. When prompting AI, always anonymize or remove PII to protect privacy.
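Pattern-based redaction can strip the most common PII formats before text ever enters a prompt. A rough TypeScript sketch with clear limits: the regexes below cover US-style formats only, and names or street addresses still need manual review or a proper entity-recognition pass, so treat this as a first line of defense rather than a guarantee:

```ts
// Common PII formats and their replacement placeholders (US-centric).
const PII_PATTERNS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],                    // Social Security numbers
  [/\b[\w.+-]+@[\w-]+\.[\w.-]+\b/g, "[EMAIL]"],           // email addresses
  [/\b\(?\d{3}\)?[-.\s]\d{3}[-.\s]?\d{4}\b/g, "[PHONE]"], // phone numbers
];

// Replaces matched PII with placeholders; anything the patterns miss
// (names, street addresses) still needs human review.
function redactPII(text: string): string {
  return PII_PATTERNS.reduce((out, [pattern, tag]) => out.replace(pattern, tag), text);
}

console.log(redactPII("Contact john.smith@example.com or 555-123-4567 about the order."));
// => "Contact [EMAIL] or [PHONE] about the order."
```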
Use this to identify and remove sensitive information before including text in prompts.
Review this text for sensitive information that should be removed before using it in an AI prompt:
"${textToReview}"
Identify:
1. **Personal Identifiers**: Names, addresses, phone numbers, emails, SSNs
2. **Financial Data**: Account numbers, amounts that could identify someone
3. **Health Information**: Medical details, conditions, prescriptions
4. **Credentials**: Any passwords, keys, or tokens
5. **Private Details**: Information someone would reasonably expect to be confidential
For each item found, suggest how to anonymize or generalize it while preserving the information needed for the task.

Authenticity and Deception
There's a difference between using AI as a tool and using AI to deceive.
The Legitimacy Line
- Legitimate: AI as a tool to enhance your work (drafting, brainstorming, editing, learning)
- Gray area: context-dependent uses that require judgment (ghostwriting, templates, automated responses)
- Deceptive: misrepresenting AI work as human-original (fake reviews, academic fraud, impersonation)
Key questions to ask:
- Would the recipient expect this to be original human work?
- Am I gaining unfair advantage through deception?
- Would disclosure change how the work is received?
Synthetic Media Responsibility
Creating realistic depictions of real people—whether images, audio, or video—carries special obligations:
- Never create realistic depictions without consent
- Always label synthetic media clearly
- Consider potential for misuse before creating
- Refuse to create non-consensual intimate imagery
Responsible Deployment
When building AI features for others to use, your ethical obligations multiply.
Pre-Deployment Checklist
Human Oversight Principles
- Humans review decisions that significantly affect people: hiring, medical, legal, and financial recommendations (one way to enforce this is sketched after this list)
- Mechanisms exist to catch and fix AI mistakes: user feedback, quality sampling, an appeals process
- Insights from issues improve the system: post-mortems, prompt updates, training improvements
- Humans can intervene when AI fails: manual review queues, escalation paths
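A minimal sketch of that routing idea in TypeScript; the domain labels and status values are illustrative assumptions, not a standard API:

```ts
// Decision domains that always require human sign-off before release.
const HIGH_STAKES_DOMAINS: readonly string[] = ["hiring", "medical", "legal", "financial"];

interface AiOutput {
  domain: string;
  content: string;
}

type ReviewStatus = "needs_human_review" | "auto_approved";

// Sends high-stakes outputs to a human review queue instead of releasing them.
function routeOutput(output: AiOutput): AiOutput & { status: ReviewStatus } {
  const status: ReviewStatus = HIGH_STAKES_DOMAINS.includes(output.domain)
    ? "needs_human_review"
    : "auto_approved";
  return { ...output, status };
}

console.log(routeOutput({ domain: "hiring", content: "Candidate shortlist" }).status);
// => "needs_human_review"
```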
Special Context Guidelines
Some domains require extra care due to their potential for harm or the vulnerability of those involved.
Healthcare
Template for AI systems that might receive health-related queries.
You are an AI assistant. When users ask about health or medical topics:
**Always**:
- Recommend consulting a qualified healthcare provider for personal medical decisions
- Provide general educational information, not personalized medical advice
- Include disclaimers that you cannot diagnose conditions
- Suggest contacting emergency services (911 or the local equivalent) for urgent situations
**Never**:
- Provide specific diagnoses
- Recommend specific medications or dosages
- Discourage someone from seeking professional care
- Make claims about treatments without noting uncertainty
User question: ${healthQuestion}
Respond helpfully while following these guidelines.

Legal and Financial
These domains have regulatory implications and require appropriate disclaimers; a sketch after the list shows one way to attach them automatically:
- Provide general information, not legal advice: "This is general information. For your specific situation, consult a licensed attorney."
- Educate without giving personal financial advice: "This is educational. Consider consulting a financial advisor for your situation."
- Flag that laws vary by location: "Laws differ by state/country. Verify requirements for your jurisdiction."
Children and Education
- Ensure outputs are suitable for the age group: filter mature content, use age-appropriate language
- Support learning rather than replacing it: explain concepts instead of writing essays for students
- Apply extra protection for vulnerable users: stricter content filters, no personal data collection
Self-Assessment
Before deploying any prompt or AI system, pressure-test your judgment on an ambiguous case. For example: a user asks your AI system how to "get rid of someone who's bothering them." What's the most appropriate response strategy? The request could be entirely benign (blocking a contact, setting boundaries with a coworker) or a genuine red flag, so the intent vs. impact framework from earlier in this chapter applies: ask a clarifying question about the situation, offer safe interpretations like blocking or reporting, and decline anything that could facilitate harm.