Job searching is painful. You open LinkedIn, set filters, and manually scan hundreds of postings — only to find most were posted weeks ago and already have 500+ applicants. What if Claude could run that search for you, scrape the results, and hand you a clean, interactive dashboard in one shot?
This post walks through exactly that: using Claude's Apify connector to scrape LinkedIn jobs in real time and render them as a fully filterable, sortable HTML dashboard — with the complete prompt you can use right now.
What Is the Apify Connector in Claude?
Claude has a built-in Apify MCP (Model Context Protocol) connector that lets it call Apify actors directly — no coding required. Apify is a web scraping platform with thousands of pre-built actors (scrapers) for LinkedIn, Google, Amazon, and more.
When you give Claude access to the Apify connector, it can:
- Call any Apify actor with custom inputs
- Wait for the scrape to complete
- Receive the structured JSON results
- Process and render the data — as HTML, markdown, charts, or anything else
Setup required: Go to Claude.ai → Settings → Integrations → Apify and connect your Apify account. You'll need a free Apify account and an API token. The LinkedIn jobs actor costs approximately $0.50–$1.00 per 100 results on Apify's pay-per-result model.
The Full System Architecture
How Claude Prompting Works Here
The key insight is that Claude isn't just fetching data — it's acting as a workflow orchestrator. The prompt gives Claude three things:
- Configuration variables — role, level, location (parameterized so you can reuse the prompt)
- Tool invocation instructions — which Apify actor to call, what URL to pass, how many results to fetch
- Output specification — exactly what the HTML dashboard should look like, column by column
This is a pattern called structured tool-use prompting. You're not asking Claude to write code you then run — you're asking Claude to call a live tool and process real data in a single turn.
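The connector hides the plumbing, but it helps to see what one actor call amounts to. The sketch below uses Apify's synchronous run endpoint (`run-sync-get-dataset-items`), which blocks until the actor finishes and returns the dataset as JSON; the input body field names are placeholders, so check the actor's input schema on Apify for the real ones.

```javascript
// What one connector call boils down to: a single POST to Apify's
// synchronous run endpoint. NOTE: the input body shape is a placeholder;
// the actor's actual input schema may differ.
const ACTOR = "curious_coder~linkedin-jobs-scraper"; // "/" becomes "~" in API paths

function buildRunRequest(searchUrl, maxResults, token) {
  return {
    url: `https://api.apify.com/v2/acts/${ACTOR}/run-sync-get-dataset-items?token=${token}`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ searchUrl, maxResults }), // hypothetical field names
    },
  };
}

// Usage (Node 18+):
// const { url, options } = buildRunRequest(linkedinUrl, 100, process.env.APIFY_TOKEN);
// const jobs = await (await fetch(url, options)).json();
```

The point of the prompt pattern is that Claude composes this call, waits on it, and post-processes the response, all inside one turn.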
Understanding the LinkedIn Search URL Parameters
The Apify actor takes a LinkedIn jobs search URL. Understanding the URL parameters lets you customize precisely:
https://www.linkedin.com/jobs/search/?keywords=Software+Engineer
&f_E=2 ← seniority level filter
&f_TPR=r86400 ← time posted: last 24 hours (86400 seconds)
&f_JT=F ← job type: Full-time (optional)
&f_WT=2 ← work type: Remote (optional, 1=On-site, 2=Remote, 3=Hybrid)
&position=1
&pageNum=0
| Parameter | LinkedIn label | Meaning |
|---|---|---|
| f_E=1 | Internship | Internship level |
| f_E=2 | Entry level | 0–2 years experience |
| f_E=3 | Associate | Early career |
| f_E=3%2C4 | Mid-Senior | Mid + Senior combined |
| f_E=5 | Director | Director level |
| f_TPR=r3600 | Last hour | Very fresh postings |
| f_TPR=r86400 | Last 24h | Daily fresh postings |
| f_TPR=r604800 | Last week | Weekly postings |
| f_WT=2 | Remote | Remote jobs only |
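These parameters can be assembled programmatically. A minimal sketch, assuming the level codes from the table above — `URLSearchParams` handles the encoding, turning spaces into `+` and the comma in `3,4` into `%2C`:

```javascript
// Builds the LinkedIn jobs search URL from the parameters described above.
// workType follows f_WT (1 = On-site, 2 = Remote, 3 = Hybrid).
const LEVEL_CODES = {
  "Internship": "1",
  "Entry level": "2",
  "Associate": "3",
  "Mid-Senior": "3,4",
  "Director": "5",
};

function buildSearchUrl({ role, level, secondsBack = 86400, workType = null }) {
  const params = new URLSearchParams({
    keywords: role,
    f_E: LEVEL_CODES[level],
    f_TPR: `r${secondsBack}`, // e.g. r86400 = posted in the last 24 hours
    position: "1",
    pageNum: "0",
  });
  if (workType) params.set("f_WT", workType);
  return `https://www.linkedin.com/jobs/search/?${params}`;
}
```

Calling `buildSearchUrl({ role: "Software Engineer", level: "Entry level" })` reproduces the example URL at the top of this section.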
The Complete Prompt
Copy this entire prompt into Claude (with the Apify connector enabled). Change the 3 config variables at the top:
-------------------------------------------------------------
CONFIGURE THESE BEFORE RUNNING:
ROLE = Software Engineer
LEVEL = Entry level
LOCATION = United States
-------------------------------------------------------------
Using the Apify connector, search LinkedIn for [ROLE] jobs posted
in the last 24 hours at the [LEVEL] seniority level in [LOCATION].
Use the actor: curious_coder/linkedin-jobs-scraper
Pass this search URL to the actor:
https://www.linkedin.com/jobs/search/?keywords=[ROLE]&f_E=2&f_TPR=r86400&position=1&pageNum=0
(Level filter reference:
Entry level = f_E=2
Early career = f_E=3
Mid-Senior = f_E=3%2C4
Senior = f_E=4
Internship = f_E=1)
Fetch at least 100 results.
Then present ALL results as a styled, interactive HTML file with
these exact columns:
# | Job Title (linked) | Company | Location | Type | Level | Salary | Applicants | Posted | Link
Requirements for the HTML output:
- Dark background (#0e0f31), monospace font (JetBrains Mono or Fira Code)
- Clean grid/table layout with subtle row hover highlight
- Color-coded employment type tags:
green = Full-time
amber = Contract
blue = Part-time
purple = Internship
- Flag any posting with 200+ applicants with a fire emoji
- Search bar at the top: filters rows live by title, company, or location
- Dropdown filters for: Employment Type, Seniority Level, Remote/On-site
- Clickable column headers to sort ascending/descending (show arrow indicator)
- Each job title links directly to the LinkedIn posting URL
- "View →" button in the last column also links to the posting
- Footer shows: total job count + "Fetched on [current date and time]"
- Export button: downloads the table as a CSV file
Do not summarize. Return the complete, self-contained HTML file
as your output. All CSS and JavaScript must be inline.
What Claude Actually Does Step by Step
What the JSON Output Looks Like
The Apify actor returns structured JSON per job. Here's what each object contains:
{
"id": "3876543210",
"title": "Software Engineer, Machine Learning",
"company": "Meta",
"location": "Menlo Park, CA",
"employmentType": "Full-time",
"seniorityLevel": "Entry level",
"salary": "$130,000/yr - $180,000/yr",
"applicantsCount": 347,
"postedAt": "2026-04-03T08:14:00Z",
"jobUrl": "https://www.linkedin.com/jobs/view/3876543210/",
"description": "We are looking for a Software Engineer..."
}
Claude receives an array of 100+ of these objects, then generates the entire dashboard HTML in one shot — no post-processing needed.
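The JSON-to-table mapping is mechanical. A sketch using exactly the fields shown above — the salary fallback and the 200+ flag mirror the prompt's rules:

```javascript
// Maps one actor result object to one dashboard row.
function toRow(job, index) {
  return {
    n: index + 1,
    title: job.title,
    company: job.company,
    location: job.location,
    type: job.employmentType,
    level: job.seniorityLevel,
    salary: job.salary || "Not listed",   // LinkedIn often omits salary
    applicants: job.applicantsCount,
    hot: job.applicantsCount >= 200,      // rendered as the fire emoji
    posted: job.postedAt,
    url: job.jobUrl,
  };
}

// const rows = jobs.map(toRow);  // jobs = the array the actor returned
```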
The Dashboard HTML Structure Claude Generates
The prompt instructs Claude to build a self-contained HTML file. Here's the skeleton of what it produces:
<!DOCTYPE html>
<html>
<head>
<style>
/* Dark theme */
body { background: #0e0f31; color: #e0e0e0; font-family: 'JetBrains Mono', monospace; }
.search-bar { width: 100%; padding: 1rem; background: #1a1a2e; border: 1px solid #333; color: #fff; }
.filters { display: flex; gap: 1rem; margin: 1rem 0; }
table { width: 100%; border-collapse: collapse; }
th { cursor: pointer; background: #1877F2; color: #fff; padding: 0.8rem; }
th:hover { background: #0f46a2; }
th.asc::after { content: ' ↑'; }
th.desc::after { content: ' ↓'; }
tr:hover td { background: rgba(255,255,255,0.05); }
.tag-full { background: #0d6e4e22; color: #00c2a0; border: 1px solid #00c2a0; }
.tag-contract{ background: #b4530922; color: #f39c12; border: 1px solid #f39c12; }
.tag-part    { background: #1e3a8a22; color: #60a5fa; border: 1px solid #60a5fa; }
.tag-intern  { background: #42017722; color: #a855f7; border: 1px solid #a855f7; }
.hot { font-size: 1.2rem; } /* 🔥 for 200+ applicants */
.view-btn { background: #1877F2; color: #fff; padding: 0.3rem 0.8rem; border-radius: 4px; }
</style>
</head>
<body>
<h1>Software Engineer Jobs — Entry Level — United States</h1>
<input class="search-bar" placeholder="Search by title, company, location..."
oninput="filterTable(this.value)">
<div class="filters">
<select onchange="filterType(this.value)">...</select>
<select onchange="filterLevel(this.value)">...</select>
<button onclick="exportCSV()">Export CSV</button>
</div>
<table id="jobsTable">
<thead>
<tr>
<th onclick="sortTable(0)">#</th>
<th onclick="sortTable(1)">Job Title</th>
<th onclick="sortTable(2)">Company</th>
<th onclick="sortTable(3)">Location</th>
<th onclick="sortTable(4)">Type</th>
<th onclick="sortTable(5)">Level</th>
<th onclick="sortTable(6)">Salary</th>
<th onclick="sortTable(7)">Applicants</th>
<th onclick="sortTable(8)">Posted</th>
<th>Link</th>
</tr>
</thead>
<tbody id="tableBody">
<!-- Claude populates 100+ rows here -->
</tbody>
</table>
<footer>Total: 127 jobs | Fetched on Apr 3, 2026 at 3:45 PM</footer>
<script>
// Sort, filter, and export logic — all inline
function sortTable(col) { ... }
function filterTable(query) { ... }
function exportCSV() { ... }
</script>
</body>
</html>
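The `sortTable`, `filterTable`, and `exportCSV` bodies are elided in the skeleton. Their core logic, shown here as a DOM-free sketch over the same row objects the table is built from (not the exact code Claude emits):

```javascript
// Sort a copy of the rows by one key, ascending or descending.
function sortRows(rows, key, ascending = true) {
  const dir = ascending ? 1 : -1;
  return [...rows].sort((a, b) =>
    a[key] < b[key] ? -dir : a[key] > b[key] ? dir : 0);
}

// Live search: keep rows whose title, company, or location contains the query.
function filterRows(rows, query) {
  const q = query.toLowerCase();
  return rows.filter(r =>
    [r.title, r.company, r.location].some(v => String(v).toLowerCase().includes(q)));
}

// Serialize rows to CSV, quote-escaping per RFC 4180.
function toCSV(rows, keys) {
  const esc = v => `"${String(v).replace(/"/g, '""')}"`;
  return [keys.join(","), ...rows.map(r => keys.map(k => esc(r[k])).join(","))].join("\n");
}
```

In the real dashboard these run against the `<tbody>` rows, and `toCSV`'s output is typically wrapped in a Blob and handed to a temporary download link.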
Prompt Variations for Different Use Cases
The base prompt is configurable. Here are the key variations:
| Use Case | Change This in the Prompt |
|---|---|
| Remote jobs only | Add &f_WT=2 to the search URL |
| Last week instead of 24h | Change f_TPR=r86400 to f_TPR=r604800 |
| Multiple roles | Run the prompt twice with different ROLE values |
| Specific city | Add &location=San+Francisco%2C+CA to the URL |
| Exclude certain companies | Add a post-processing instruction: "Remove rows where Company is [X]" |
| Sort by least applicants | Add: "Default sort by Applicants ascending (lowest competition first)" |
| Add AI match score | Add: "Add a 'Match %' column — score each job 0-100 based on [your skills]" |
Advanced: Ask Claude to Score Jobs Against Your Resume
This is where it gets powerful. After getting the job data, you can ask Claude to rank them:
After fetching the jobs, add a "Match %" column to the table.
Score each job 0-100 based on how well it matches this candidate profile:
Skills: Python, PyTorch, SQL, AWS, distributed systems, React
Experience: 2 years data science, 1 year ML engineering
Education: MS Computer Science (Virginia Tech)
Preferences: Remote preferred, min $120k, ML/AI roles weighted higher
Color-code the Match % column:
90-100% → green background
70-89% → yellow background
below 70% → no highlight
Sort the table by Match % descending by default.
Claude will score each job based on the description text returned by the scraper and color-code the dashboard accordingly — effectively acting as a recruiter filtering the list for you.
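Claude scores semantically by reading each description, so results vary run to run. For intuition only, a deterministic keyword-overlap approximation — illustrative, and deliberately ignoring the salary and remote preferences a real scorer should weight:

```javascript
// Crude stand-in for the match score: the fraction of candidate skills
// that appear verbatim in the job description text.
function matchScore(description, skills) {
  const text = description.toLowerCase();
  const hits = skills.filter(s => text.includes(s.toLowerCase())).length;
  return Math.round((hits / skills.length) * 100);
}

// Bucket for the color coding described in the prompt above.
function matchBucket(score) {
  if (score >= 90) return "green";
  if (score >= 70) return "yellow";
  return "none";
}
```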
Why This Approach Works
What makes this pattern powerful isn't the scraping — it's the prompt chain:
- Tool use — Claude calls Apify, which calls LinkedIn. No browser automation, no Selenium, no API key negotiations.
- Structured output — the JSON → HTML transformation is specified precisely in the prompt. Claude doesn't decide the format; you do.
- Single-shot execution — one prompt, one response, one usable artifact. Compare this to writing a Python scraper + pandas + a Plotly dashboard from scratch.
- Composability — you can chain follow-up prompts: "Now filter to only remote jobs and re-render" or "Add a column for Glassdoor rating".
Cost reality check: The curious_coder/linkedin-jobs-scraper actor charges approximately $0.008 per result on Apify. 100 results = ~$0.80. Claude API calls are additional if you're using the API directly. For job searching, this is a bargain compared to LinkedIn Premium ($40+/month).
Limitations to Know
- LinkedIn rate limiting — if you run this too frequently (multiple times per hour), LinkedIn may temporarily block Apify's IPs. Once per day is safe.
- Salary data is sparse — LinkedIn only shows salary when the employer adds it. Many rows will show "Not listed". This is a data limitation, not a scraper bug.
- Applicant count accuracy — LinkedIn shows "200+ applicants" as a cap, not exact counts. The scraper faithfully returns what LinkedIn shows.
- Claude output length — 100 jobs across 10 columns of text can approach Claude's output limit. If the HTML gets truncated, ask for 50 results or split into two batches.
- Actor reliability — third-party Apify actors can break when LinkedIn changes its HTML. Check the actor's last update date on Apify before relying on it.
Full Workflow Summary
Takeaways
- Claude's Apify connector turns natural language into live web data — no code required.
- The key to getting a good dashboard is specifying the output format precisely in the prompt — column names, color codes, sort behavior, all of it.
- Parameterize your prompts (ROLE, LEVEL, LOCATION as variables) so you can reuse them daily without editing.
- The match-scoring extension turns a data dump into a prioritized shortlist — far more useful than scrolling LinkedIn manually.
- This same pattern works for any Apify actor: Google job listings, Indeed, Glassdoor, Amazon product searches, news articles — the architecture is identical.