---
name: sales-research
description: This skill provides methodology and best practices for researching sales prospects.
---
# Sales Research
## Overview
This skill provides methodology and best practices for researching sales prospects. It covers company research, contact profiling, and signal detection to surface actionable intelligence.
## Usage
The company-researcher and contact-researcher sub-agents reference this skill when:
- Researching new prospects
- Finding company information
- Profiling individual contacts
- Detecting buying signals
## Research Methodology
### Company Research Checklist
1. **Basic Profile**
- Company name, industry, size (employees, revenue)
- Headquarters and key locations
- Founded date, growth stage
2. **Recent Developments**
- Funding announcements (last 12 months)
- M&A activity
- Leadership changes
- Product launches
3. **Tech Stack**
- Known technologies (BuiltWith, StackShare)
- Job postings mentioning tools
- Integration partnerships
4. **Signals**
- Job postings (scaling = opportunity)
- Glassdoor reviews (pain points)
- News mentions (context)
- Social media activity
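These findings map onto the prospect records that `scripts/priority-scorer.py` ranks. A minimal sketch of one such record, using the field names from that script (the values are illustrative):
```python
prospect = {
    "company": "DataFlow Systems",
    "contact": "Sarah Chen",
    "size_indicators": {"employees": 200, "revenue": 25_000_000},
    "timing_signals": ["just_funded", "active_evaluation"],
    "signals": ["recent_funding", "job_postings_relevant"],
    "relationship": {"is_followup": False, "mutual_connections": 2},
}
```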
### Contact Research Checklist
1. **Professional Background**
- Current role and tenure
- Previous companies and roles
- Education
2. **Influence Indicators**
- Reporting structure
- Decision-making authority
- Budget ownership
3. **Engagement Hooks**
- Recent LinkedIn posts
- Published articles
- Speaking engagements
- Mutual connections
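`scripts/linkedin-parser.py` structures these findings into a contact record. An abbreviated sketch of its output, using the field names from that script (the values are illustrative placeholders):
```python
contact = {
    "name": "Sarah Chen",
    "title": "VP of Engineering",   # illustrative value
    "company": "DataFlow Systems",
    "tenure": "2 years",            # illustrative value
    "previous_roles": [],           # filled from career history
    "mutual_connections": [],       # names of shared connections
    "recent_activity": [],          # recent posts and engagement
}
```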
## Resources
- `resources/signal-indicators.md` - Taxonomy of buying signals
- `resources/research-checklist.md` - Complete research checklist
## Scripts
- `scripts/company-enricher.py` - Aggregate company data from multiple sources
- `scripts/linkedin-parser.py` - Structure LinkedIn profile data
- `scripts/priority-scorer.py` - Calculate and rank prospect priorities
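A minimal sketch of calling these scripts from Python. Because the filenames contain hyphens, they cannot be loaded with a plain `import` statement, so this example loads them by path (the paths assume the skill's directory layout above):
```python
import importlib.util
from pathlib import Path

def load_script(path: str):
    """Load one of the skill's hyphen-named scripts as a module."""
    name = Path(path).stem.replace("-", "_")
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

enricher = load_script("scripts/company-enricher.py")
profile = enricher.main(company_name="DataFlow Systems", domain="dataflow.io")

parser = load_script("scripts/linkedin-parser.py")
contact = parser.main(name="Sarah Chen", company="DataFlow Systems")
```
Each script can also be run directly (for example `python scripts/company-enricher.py`), which prints the result of a built-in example call as JSON.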
FILE:company-enricher.py
#!/usr/bin/env python3
"""
company-enricher.py - Aggregate company data from multiple sources
Inputs:
- company_name: string
- domain: string (optional)
Outputs:
- profile:
name: string
industry: string
size: string
funding: string
tech_stack: [string]
recent_news: [news items]
Dependencies:
- requests, beautifulsoup4
"""
# Requirements: requests, beautifulsoup4
import json
from typing import Any
from dataclasses import dataclass, asdict
from datetime import datetime
@dataclass
class NewsItem:
title: str
date: str
source: str
url: str
summary: str
@dataclass
class CompanyProfile:
name: str
domain: str
industry: str
size: str
location: str
founded: str
funding: str
tech_stack: list[str]
recent_news: list[dict]
competitors: list[str]
description: str
def search_company_info(company_name: str, domain: str = None) -> dict:
"""
Search for basic company information.
In production, this would call APIs like Clearbit, Crunchbase, etc.
"""
# TODO: Implement actual API calls
# Placeholder return structure
return {
"name": company_name,
"domain": domain or f"{company_name.lower().replace(' ', '')}.com",
"industry": "Technology", # Would come from API
"size": "Unknown",
"location": "Unknown",
"founded": "Unknown",
"description": f"Information about {company_name}"
}
def search_funding_info(company_name: str) -> dict:
"""
Search for funding information.
In production, would call Crunchbase, PitchBook, etc.
"""
# TODO: Implement actual API calls
return {
"total_funding": "Unknown",
"last_round": "Unknown",
"last_round_date": "Unknown",
"investors": []
}
def search_tech_stack(domain: str) -> list[str]:
"""
Detect technology stack.
In production, would call BuiltWith, Wappalyzer, etc.
"""
# TODO: Implement actual API calls
return []
def search_recent_news(company_name: str, days: int = 90) -> list[dict]:
"""
Search for recent news about the company.
In production, would call news APIs.
"""
# TODO: Implement actual API calls
return []
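# Illustrative only: one way search_recent_news() could be wired to a news-search
# HTTP API. The endpoint, query parameters, and response keys below are hypothetical
# placeholders, not a real service; adapt them to whichever provider you use.
# This helper is not called anywhere in this script.
def example_news_lookup(company_name: str, api_key: str, days: int = 90) -> list[dict]:
    import requests
    from datetime import timedelta

    since = (datetime.now() - timedelta(days=days)).date().isoformat()
    response = requests.get(
        "https://news.example.com/v1/search",  # hypothetical endpoint
        params={"q": company_name, "from": since, "api_key": api_key},
        timeout=10,
    )
    response.raise_for_status()
    return [
        {
            "title": item.get("title", ""),
            "date": item.get("published_at", ""),
            "source": item.get("source", ""),
            "url": item.get("url", ""),
            "summary": item.get("summary", ""),
        }
        for item in response.json().get("articles", [])
    ]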
def main(
company_name: str,
domain: str = None
) -> dict[str, Any]:
"""
Aggregate company data from multiple sources.
Args:
company_name: Company name to research
domain: Company domain (optional, will be inferred)
Returns:
dict with company profile including industry, size, funding, tech stack, news
"""
# Get basic company info
basic_info = search_company_info(company_name, domain)
# Get funding information
funding_info = search_funding_info(company_name)
# Detect tech stack
company_domain = basic_info.get("domain", domain)
tech_stack = search_tech_stack(company_domain) if company_domain else []
# Get recent news
news = search_recent_news(company_name)
# Compile profile
profile = CompanyProfile(
name=basic_info["name"],
domain=basic_info["domain"],
industry=basic_info["industry"],
size=basic_info["size"],
location=basic_info["location"],
founded=basic_info["founded"],
funding=funding_info.get("total_funding", "Unknown"),
tech_stack=tech_stack,
recent_news=news,
competitors=[], # Would be enriched from industry analysis
description=basic_info["description"]
)
return {
"profile": asdict(profile),
"funding_details": funding_info,
"enriched_at": datetime.now().isoformat(),
"sources_checked": ["company_info", "funding", "tech_stack", "news"]
}
if __name__ == "__main__":
# Example usage
result = main(
company_name="DataFlow Systems",
domain="dataflow.io"
)
print(json.dumps(result, indent=2))
FILE:linkedin-parser.py
#!/usr/bin/env python3
"""
linkedin-parser.py - Structure LinkedIn profile data
Inputs:
- profile_url: string
- or name + company: strings
Outputs:
- contact:
name: string
title: string
tenure: string
previous_roles: [role objects]
mutual_connections: [string]
recent_activity: [post summaries]
Dependencies:
- requests
"""
# Requirements: requests
import json
from typing import Any
from dataclasses import dataclass, asdict
from datetime import datetime
@dataclass
class PreviousRole:
title: str
company: str
duration: str
description: str
@dataclass
class RecentPost:
date: str
content_preview: str
engagement: int
topic: str
@dataclass
class ContactProfile:
name: str
title: str
company: str
location: str
tenure: str
previous_roles: list[dict]
education: list[str]
mutual_connections: list[str]
recent_activity: list[dict]
profile_url: str
headline: str
def search_linkedin_profile(name: str = None, company: str = None, profile_url: str = None) -> dict:
"""
Search for LinkedIn profile information.
In production, would use LinkedIn API or Sales Navigator.
"""
# TODO: Implement actual LinkedIn API integration
# Note: LinkedIn's API has strict terms of service
return {
"found": False,
"name": name or "Unknown",
"title": "Unknown",
"company": company or "Unknown",
"location": "Unknown",
"headline": "",
"tenure": "Unknown",
"profile_url": profile_url or ""
}
def get_career_history(profile_data: dict) -> list[dict]:
"""
Extract career history from profile.
"""
# TODO: Implement career extraction
return []
def get_mutual_connections(profile_data: dict, user_network: list = None) -> list[str]:
"""
Find mutual connections.
"""
# TODO: Implement mutual connection detection
return []
def get_recent_activity(profile_data: dict, days: int = 30) -> list[dict]:
"""
Get recent posts and activity.
"""
# TODO: Implement activity extraction
return []
def main(
name: str = None,
company: str = None,
profile_url: str = None
) -> dict[str, Any]:
"""
Structure LinkedIn profile data for sales prep.
Args:
name: Person's name
company: Company they work at
profile_url: Direct LinkedIn profile URL
Returns:
dict with structured contact profile
"""
if not profile_url and not (name and company):
return {"error": "Provide either profile_url or name + company"}
# Search for profile
profile_data = search_linkedin_profile(
name=name,
company=company,
profile_url=profile_url
)
if not profile_data.get("found"):
return {
"found": False,
"name": name or "Unknown",
"company": company or "Unknown",
"message": "Profile not found or limited access",
"suggestions": [
"Try searching directly on LinkedIn",
"Check for alternative spellings",
"Verify the person still works at this company"
]
}
# Get career history
previous_roles = get_career_history(profile_data)
# Find mutual connections
mutual_connections = get_mutual_connections(profile_data)
# Get recent activity
recent_activity = get_recent_activity(profile_data)
# Compile contact profile
contact = ContactProfile(
name=profile_data["name"],
title=profile_data["title"],
company=profile_data["company"],
location=profile_data["location"],
tenure=profile_data["tenure"],
previous_roles=previous_roles,
education=[], # Would be extracted from profile
mutual_connections=mutual_connections,
recent_activity=recent_activity,
profile_url=profile_data["profile_url"],
headline=profile_data["headline"]
)
return {
"found": True,
"contact": asdict(contact),
"research_date": datetime.now().isoformat(),
"data_completeness": calculate_completeness(contact)
}
def calculate_completeness(contact: ContactProfile) -> dict:
"""Calculate how complete the profile data is."""
fields = {
"basic_info": bool(contact.name and contact.title and contact.company),
"career_history": len(contact.previous_roles) > 0,
"mutual_connections": len(contact.mutual_connections) > 0,
"recent_activity": len(contact.recent_activity) > 0,
"education": len(contact.education) > 0
}
complete_count = sum(fields.values())
return {
"fields": fields,
"score": f"{complete_count}/{len(fields)}",
"percentage": int((complete_count / len(fields)) * 100)
}
if __name__ == "__main__":
# Example usage
result = main(
name="Sarah Chen",
company="DataFlow Systems"
)
print(json.dumps(result, indent=2))
FILE:priority-scorer.py
#!/usr/bin/env python3
"""
priority-scorer.py - Calculate and rank prospect priorities
Inputs:
- prospects: [prospect objects with signals]
- weights: {deal_size, timing, warmth, signals}
Outputs:
- ranked: [prospects with scores and reasoning]
Dependencies:
- (none - pure Python)
"""
import json
from typing import Any
from dataclasses import dataclass
# Default scoring weights
DEFAULT_WEIGHTS = {
"deal_size": 0.25,
"timing": 0.30,
"warmth": 0.20,
"signals": 0.25
}
# Signal score mapping
SIGNAL_SCORES = {
# High-intent signals
"recent_funding": 10,
"leadership_change": 8,
"job_postings_relevant": 9,
"expansion_news": 7,
"competitor_mention": 6,
# Medium-intent signals
"general_hiring": 4,
"industry_event": 3,
"content_engagement": 3,
# Relationship signals
"mutual_connection": 5,
"previous_contact": 6,
"referred_lead": 8,
# Negative signals
"recent_layoffs": -3,
"budget_freeze_mentioned": -5,
"competitor_selected": -7,
}
@dataclass
class ScoredProspect:
company: str
contact: str
call_time: str
raw_score: float
normalized_score: int
priority_rank: int
score_breakdown: dict
reasoning: str
is_followup: bool
def score_deal_size(prospect: dict) -> tuple[float, str]:
"""Score based on estimated deal size."""
size_indicators = prospect.get("size_indicators", {})
employee_count = size_indicators.get("employees", 0)
revenue_estimate = size_indicators.get("revenue", 0)
# Simple scoring based on company size
if employee_count > 1000 or revenue_estimate > 100_000_000:
return 10.0, "Enterprise-scale opportunity"
elif employee_count > 200 or revenue_estimate > 20_000_000:
return 7.0, "Mid-market opportunity"
elif employee_count > 50:
return 5.0, "SMB opportunity"
else:
return 3.0, "Small business"
def score_timing(prospect: dict) -> tuple[float, str]:
"""Score based on timing signals."""
timing_signals = prospect.get("timing_signals", [])
score = 5.0 # Base score
reasons = []
for signal in timing_signals:
if signal == "budget_cycle_q4":
score += 3
reasons.append("Q4 budget planning")
elif signal == "contract_expiring":
score += 4
reasons.append("Contract expiring soon")
elif signal == "active_evaluation":
score += 5
reasons.append("Actively evaluating")
elif signal == "just_funded":
score += 3
reasons.append("Recently funded")
return min(score, 10.0), "; ".join(reasons) if reasons else "Standard timing"
def score_warmth(prospect: dict) -> tuple[float, str]:
"""Score based on relationship warmth."""
relationship = prospect.get("relationship", {})
if relationship.get("is_followup"):
last_outcome = relationship.get("last_outcome", "neutral")
if last_outcome == "positive":
return 9.0, "Warm follow-up (positive last contact)"
elif last_outcome == "neutral":
return 7.0, "Follow-up (neutral last contact)"
else:
return 5.0, "Follow-up (needs re-engagement)"
if relationship.get("referred"):
return 8.0, "Referred lead"
if relationship.get("mutual_connections", 0) > 0:
return 6.0, f"{relationship['mutual_connections']} mutual connections"
if relationship.get("inbound"):
return 7.0, "Inbound interest"
return 4.0, "Cold outreach"
def score_signals(prospect: dict) -> tuple[float, str]:
"""Score based on buying signals detected."""
signals = prospect.get("signals", [])
total_score = 0
signal_reasons = []
for signal in signals:
signal_score = SIGNAL_SCORES.get(signal, 0)
total_score += signal_score
if signal_score > 0:
signal_reasons.append(signal.replace("_", " "))
# Normalize to 0-10 scale
normalized = min(max(total_score / 2, 0), 10)
reason = f"Signals: {', '.join(signal_reasons)}" if signal_reasons else "No strong signals"
return normalized, reason
def calculate_priority_score(
prospect: dict,
weights: dict = None
) -> ScoredProspect:
"""Calculate overall priority score for a prospect."""
weights = weights or DEFAULT_WEIGHTS
# Calculate component scores
deal_score, deal_reason = score_deal_size(prospect)
timing_score, timing_reason = score_timing(prospect)
warmth_score, warmth_reason = score_warmth(prospect)
signal_score, signal_reason = score_signals(prospect)
# Weighted total
raw_score = (
deal_score * weights["deal_size"] +
timing_score * weights["timing"] +
warmth_score * weights["warmth"] +
signal_score * weights["signals"]
)
# Compile reasoning
reasons = []
if timing_score >= 8:
reasons.append(timing_reason)
if signal_score >= 7:
reasons.append(signal_reason)
if warmth_score >= 7:
reasons.append(warmth_reason)
if deal_score >= 8:
reasons.append(deal_reason)
return ScoredProspect(
company=prospect.get("company", "Unknown"),
contact=prospect.get("contact", "Unknown"),
call_time=prospect.get("call_time", "Unknown"),
raw_score=round(raw_score, 2),
normalized_score=int(raw_score * 10),
priority_rank=0, # Will be set after sorting
score_breakdown={
"deal_size": {"score": deal_score, "reason": deal_reason},
"timing": {"score": timing_score, "reason": timing_reason},
"warmth": {"score": warmth_score, "reason": warmth_reason},
"signals": {"score": signal_score, "reason": signal_reason}
},
reasoning="; ".join(reasons) if reasons else "Standard priority",
is_followup=prospect.get("relationship", {}).get("is_followup", False)
)
def main(
prospects: list[dict],
weights: dict = None
) -> dict[str, Any]:
"""
Calculate and rank prospect priorities.
Args:
prospects: List of prospect objects with signals
weights: Optional custom weights for scoring components
Returns:
dict with ranked prospects and scoring details
"""
weights = weights or DEFAULT_WEIGHTS
# Score all prospects
scored = [calculate_priority_score(p, weights) for p in prospects]
# Sort by raw score descending
scored.sort(key=lambda x: x.raw_score, reverse=True)
# Assign ranks
for i, prospect in enumerate(scored, 1):
prospect.priority_rank = i
# Convert to dicts for JSON serialization
ranked = []
for s in scored:
ranked.append({
"company": s.company,
"contact": s.contact,
"call_time": s.call_time,
"priority_rank": s.priority_rank,
"score": s.normalized_score,
"reasoning": s.reasoning,
"is_followup": s.is_followup,
"breakdown": s.score_breakdown
})
return {
"ranked": ranked,
"weights_used": weights,
"total_prospects": len(prospects)
}
if __name__ == "__main__":
# Example usage
example_prospects = [
{
"company": "DataFlow Systems",
"contact": "Sarah Chen",
"call_time": "2pm",
"size_indicators": {"employees": 200, "revenue": 25_000_000},
"timing_signals": ["just_funded", "active_evaluation"],
"signals": ["recent_funding", "job_postings_relevant"],
"relationship": {"is_followup": False, "mutual_connections": 2}
},
{
"company": "Acme Manufacturing",
"contact": "Tom Bradley",
"call_time": "10am",
"size_indicators": {"employees": 500},
"timing_signals": ["contract_expiring"],
"signals": [],
"relationship": {"is_followup": True, "last_outcome": "neutral"}
},
{
"company": "FirstRate Financial",
"contact": "Linda Thompson",
"call_time": "4pm",
"size_indicators": {"employees": 300},
"timing_signals": [],
"signals": [],
"relationship": {"is_followup": False}
}
]
result = main(prospects=example_prospects)
print(json.dumps(result, indent=2))
FILE:research-checklist.md
# Prospect Research Checklist
## Company Research
### Basic Information
- [ ] Company name (verify spelling)
- [ ] Industry/vertical
- [ ] Headquarters location
- [ ] Employee count (LinkedIn, website)
- [ ] Revenue estimate (if available)
- [ ] Founded date
- [ ] Funding stage/history
### Recent News (Last 90 Days)
- [ ] Funding announcements
- [ ] Acquisitions or mergers
- [ ] Leadership changes
- [ ] Product launches
- [ ] Major customer wins
- [ ] Press mentions
- [ ] Earnings/financial news
### Digital Footprint
- [ ] Website review
- [ ] Blog/content topics
- [ ] Social media presence
- [ ] Job postings (careers page + LinkedIn)
- [ ] Tech stack (BuiltWith, job postings)
### Competitive Landscape
- [ ] Known competitors
- [ ] Market position
- [ ] Differentiators claimed
- [ ] Recent competitive moves
### Pain Point Indicators
- [ ] Glassdoor reviews (themes)
- [ ] G2/Capterra reviews (if B2B)
- [ ] Social media complaints
- [ ] Job posting patterns
## Contact Research
### Professional Profile
- [ ] Current title
- [ ] Time in role
- [ ] Time at company
- [ ] Previous companies
- [ ] Previous roles
- [ ] Education
### Decision Authority
- [ ] Reports to whom
- [ ] Team size (if manager)
- [ ] Budget authority (inferred)
- [ ] Buying involvement history
### Engagement Hooks
- [ ] Recent LinkedIn posts
- [ ] Published articles
- [ ] Podcast appearances
- [ ] Conference talks
- [ ] Mutual connections
- [ ] Shared interests/groups
### Communication Style
- [ ] Post tone (formal/casual)
- [ ] Topics they engage with
- [ ] Response patterns
## CRM Check (If Available)
- [ ] Any prior touchpoints
- [ ] Previous opportunities
- [ ] Related contacts at company
- [ ] Notes from colleagues
- [ ] Email engagement history
## Time-Based Research Depth
| Time Available | Research Depth |
|----------------|----------------|
| 5 minutes | Company basics + contact title only |
| 15 minutes | + Recent news + LinkedIn profile |
| 30 minutes | + Pain point signals + engagement hooks |
| 60 minutes | Full checklist + competitive analysis |
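The tiers are cumulative: each row adds to everything above it. A minimal sketch (an assumed helper, not part of the shipped scripts) of mapping available minutes to checklist coverage:
```python
def research_depth(minutes_available: int) -> list[str]:
    """Return the checklist areas to cover for the time available (cumulative tiers)."""
    tiers = [
        (5, ["company_basics", "contact_title"]),
        (15, ["recent_news", "linkedin_profile"]),
        (30, ["pain_point_signals", "engagement_hooks"]),
        (60, ["full_checklist", "competitive_analysis"]),
    ]
    areas: list[str] = []
    for minutes, additions in tiers:
        if minutes_available >= minutes:
            areas.extend(additions)
    return areas
```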
FILE:signal-indicators.md
# Signal Indicators Reference
## High-Intent Signals
### Job Postings
- **3+ relevant roles posted** = Active initiative, budget allocated
- **Senior hire in your domain** = Strategic priority
- **Urgency language ("ASAP", "immediate")** = Pain is acute
- **Specific tool mentioned** = Competitor or category awareness
### Financial Events
- **Series B+ funding** = Growth capital, buying power
- **IPO preparation** = Operational maturity needed
- **Acquisition announced** = Integration challenges coming
- **Revenue milestone PR** = Budget available
### Leadership Changes
- **New CXO in your domain** = 90-day priority setting
- **New CRO/CMO** = Tech stack evaluation likely
- **Founder transition to CEO** = Professionalizing operations
## Medium-Intent Signals
### Expansion Signals
- **New office opening** = Infrastructure needs
- **International expansion** = Localization, compliance
- **New product launch** = Scaling challenges
- **Major customer win** = Delivery pressure
### Technology Signals
- **RFP published** = Active buying process
- **Vendor review mentioned** = Comparison shopping
- **Tech stack change** = Integration opportunity
- **Legacy system complaints** = Modernization need
### Content Signals
- **Blog post on your topic** = Educating themselves
- **Webinar attendance** = Interest confirmed
- **Whitepaper download** = Problem awareness
- **Conference speaking** = Thought leadership, visibility
## Low-Intent Signals (Nurture)
### General Activity
- **Industry event attendance** = Market participant
- **Generic hiring** = Company growing
- **Positive press** = Healthy company
- **Social media activity** = Engaged leadership
## Signal Scoring
| Signal Type | Score | Action |
|-------------|-------|--------|
| Job posting (relevant) | +3 | Prioritize outreach |
| Recent funding | +3 | Reference in conversation |
| Leadership change | +2 | Time-sensitive opportunity |
| Expansion news | +2 | Growth angle |
| Negative reviews | +2 | Pain point angle |
| Content engagement | +1 | Nurture track |
| No signals | 0 | Discovery focus |
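A minimal sketch of totalling these scores for a prospect. The score values come directly from the table above; the helper and its names are assumptions for illustration (the full weighted model lives in `scripts/priority-scorer.py`):
```python
TABLE_SCORES = {
    "job_posting_relevant": 3,
    "recent_funding": 3,
    "leadership_change": 2,
    "expansion_news": 2,
    "negative_reviews": 2,
    "content_engagement": 1,
}

def signal_total(detected: list[str]) -> int:
    """Sum the table scores for each detected signal; unknown signals count as 0."""
    return sum(TABLE_SCORES.get(signal, 0) for signal in detected)

# Example: recent funding plus a relevant job posting
print(signal_total(["recent_funding", "job_posting_relevant"]))  # 6
```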