Most LinkedIn data extraction guides end the same way: “use a tool that has an API, paste your spreadsheet, wait for the credits to drain.” That mental model assumes you’re working in a dashboard or a sheet. In 2026, more and more prospecting work happens inside an AI assistant (Claude Desktop, ChatGPT, Cursor), and the standard plumbing for letting an AI call external tools is the Model Context Protocol (MCP).
A LinkedIn scraper MCP is the bridge: the AI assistant becomes the operator, and a LinkedIn data extraction task becomes a single sentence in chat. “Pull the headline and current company for these 20 LinkedIn URLs.” “Find the head of sales at these 30 companies and return their LinkedIn URLs.” No sidebar, no copy-paste, no separate dashboard.
This guide walks through what a LinkedIn scraper MCP actually is, the three flavors you will see in 2026 (browser-automation, API-based, and native enrichment), how to set one up with Claude Desktop, what LinkedIn’s terms of service say about each approach, and a realistic SDR workflow at the end.
Chapter 1: What is an MCP, in one paragraph
The Model Context Protocol is an open standard published in late 2024 that defines a structured way for an AI assistant to call external tools. Think of it as the USB-C port of AI tooling: instead of every AI assistant inventing its own tool-calling syntax, MCP gives them a shared format. An MCP server exposes a list of “tools” (functions the AI can call); an MCP client (Claude Desktop, ChatGPT, Cursor, Cline, Windsurf) reads that list and lets the model invoke them on demand.
A LinkedIn scraper MCP is just an MCP server whose tools happen to extract LinkedIn data. It is not a new technology stack. It is the same scraping or enrichment logic you would expose as a Python script or a REST API, packaged so an AI assistant can call it without you writing a wrapper.
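To make that concrete, here is a minimal sketch of what such a server can look like, using the official MCP Python SDK's FastMCP helper. The tool name and the backend lookup are illustrative placeholders, not any particular server's implementation.

```python
# Minimal MCP server sketch (pip install "mcp[cli]").
# The tool name and the backend lookup are illustrative placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("linkedin-enrichment")

def lookup_profile_in_backend(linkedin_url: str) -> dict:
    # Placeholder: a scraper or vendor API call would live here.
    return {"url": linkedin_url, "headline": None, "current_company": None}

@mcp.tool()
def get_profile(linkedin_url: str) -> dict:
    """Return headline and current company for a LinkedIn profile URL."""
    return lookup_profile_in_backend(linkedin_url)

if __name__ == "__main__":
    mcp.run()  # serves MCP over stdio; Claude Desktop launches it as a subprocess
```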
For the broader picture on LinkedIn extraction methods (scrapers, APIs, native enrichment), the 2026 LinkedIn data extraction guide covers the four families and how to pick between them. This article zooms in on the MCP variant.
Chapter 2: The three flavors of LinkedIn scraper MCP in 2026
Not all MCP servers labeled “LinkedIn scraper” do the same thing. The label hides three very different technical approaches, with very different risk and quality profiles.
Flavor 1: Browser-automation MCPs
These wrap a headless browser (Playwright, Patchright, Selenium) that logs into LinkedIn under a real account and scrapes profile pages on demand. The MCP server starts the browser, navigates to a profile URL, and returns the parsed fields.
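A rough sketch of that fetch path, assuming Playwright's sync API and an li_at session cookie; the h1 selector is a guess at LinkedIn's current markup and will break whenever the page changes.

```python
# Sketch of a browser-automation profile fetch (Playwright sync API).
# The li_at cookie grants full account access; the h1 selector is an assumption
# about LinkedIn's markup, not a stable contract.
from playwright.sync_api import sync_playwright

def fetch_profile_name(profile_url: str, li_at_cookie: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        context = browser.new_context()
        context.add_cookies([{
            "name": "li_at",
            "value": li_at_cookie,
            "domain": ".linkedin.com",
            "path": "/",
        }])
        page = context.new_page()
        page.goto(profile_url, wait_until="domcontentloaded")
        name = page.locator("h1").first.inner_text()  # assumed selector
        browser.close()
        return name
```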
Examples: stickerdaniel/linkedin-mcp-server, linkedin-scraper-mcp on PyPI (released April 2026), several community-built Apify and Lobehub variants.
Strengths:
- Can read anything the user can see on a profile page
- Works with Sales Navigator views
- No third-party data dependency
Weaknesses:
- Requires the user to expose their LinkedIn cookie or session to the MCP server. That cookie equals account access.
- LinkedIn rate-limits aggressively. A standard account is flagged after roughly 80-100 profile views per day.
- LinkedIn cracked down on Playwright-based scraping in late 2025; servers had to migrate to Patchright (a Playwright fork built to evade bot detection). The cat-and-mouse continues quarter to quarter.
- LinkedIn’s User Agreement, Section 8.2, prohibits “automated access” to the service. Browser-automation MCPs sit squarely in that prohibition.
Who they fit: developers who already accept the tradeoffs of running a personal LinkedIn scraper and want to move that workflow into Claude Desktop instead of a Python REPL.
Flavor 2: API-based MCPs
These call LinkedIn’s official API endpoints (Marketing Developer Platform, Sales Navigator API, Talent Solutions). The MCP server is just a thin wrapper around documented partner endpoints.
Strengths:
- Fully compliant with LinkedIn’s terms.
- No account-flag risk.
- Stable rate limits, documented behavior.
Weaknesses:
- Requires LinkedIn partner approval. The application process takes weeks and rejection is common for ambiguous use cases.
- Profile data for cold prospecting is not on the menu. The official API exposes ad-tech data, ATS integrations, and limited Sales Navigator fields, but not the wide-open profile fetch that prospecting workflows expect.
- Even Sales Navigator API access is gated behind enterprise contracts.
Who they fit: companies building integrations that LinkedIn would happily endorse (an ATS, an ad-tech platform, a CRM bidirectional sync). Not a fit for an SDR who needs to enrich a list of cold prospects.
Flavor 3: Native enrichment MCPs
These do not log into LinkedIn at all. The MCP server takes a LinkedIn URL, name + company, or email as input, then runs a waterfall across public sources, opt-in databases, and verification providers maintained by the data vendor. The data is “LinkedIn-style” but sourced from outside LinkedIn.
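What “waterfall” means in practice, sketched with hypothetical source functions (this implies nothing about any vendor's actual source list):

```python
# Illustrative enrichment waterfall: try sources in order, stop at the first hit.
# The three source functions are hypothetical placeholders.
from typing import Optional

def check_opt_in_database(linkedin_url: str) -> Optional[str]: return None      # placeholder
def check_public_web_signals(linkedin_url: str) -> Optional[str]: return None   # placeholder
def check_verification_partner(linkedin_url: str) -> Optional[str]: return None # placeholder

def enrich_email(linkedin_url: str) -> Optional[str]:
    for source in (check_opt_in_database, check_public_web_signals, check_verification_partner):
        email = source(linkedin_url)
        if email:
            return email  # first hit wins
    return None  # a miss; most vendors do not charge a credit for it
```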
Examples: Derrick MCP at https://app1.derrick-app.com/mcp, and a few smaller vendor-owned MCP endpoints that have appeared in 2026.
Strengths:
- No LinkedIn cookie handed to a third party.
- No account-flag risk.
- Deterministic credit cost; failed lookups generally do not burn credits where the source supports it.
- Same data backbone as the vendor’s API, Chrome extension, and Sheets sidebar (so the AI assistant gets the same enrichment quality the team already validated).
Weaknesses:
- Coverage depends on the vendor’s data partnerships. US enterprise data is usually deep; mid-market European data is sometimes patchier.
- Some niche fields (LinkedIn group membership, specific Sales Navigator filters) are not always covered.
Who they fit: SDRs, recruiters, growth marketers, and founders who want LinkedIn data extraction inside Claude Desktop or ChatGPT without standing up a scraper or applying for partner status. The lowest-friction path of the three.
Chapter 3: Setting up a LinkedIn scraper MCP with Claude Desktop
The setup pattern is the same across the three flavors: edit claude_desktop_config.json, add an entry under mcpServers, restart Claude Desktop. Where they differ is the binary you point to and the credentials you pass.
The shared pattern
On macOS, the file lives at ~/Library/Application Support/Claude/claude_desktop_config.json. On Windows, at %APPDATA%\Claude\claude_desktop_config.json. If it does not exist yet, create it.
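If you prefer to script that step, here is a small sketch that resolves the path per OS and creates an empty config when none exists (standard library only):

```python
# Locate (and create if missing) the Claude Desktop config file on macOS or Windows.
import json
import os
import platform
from pathlib import Path

def claude_config_path() -> Path:
    if platform.system() == "Darwin":
        return Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"
    return Path(os.environ["APPDATA"]) / "Claude" / "claude_desktop_config.json"

path = claude_config_path()
path.parent.mkdir(parents=True, exist_ok=True)
if not path.exists():
    path.write_text(json.dumps({"mcpServers": {}}, indent=2))
```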
The file looks like this:
{
  "mcpServers": {
    "linkedin": {
      "command": "node",
      "args": ["/path/to/server.js"],
      "env": {
        "API_KEY": "your-key-here"
      }
    }
  }
}
After editing, fully quit Claude Desktop and reopen it. The MCP server will appear in the bottom-right of the chat input as a tools indicator. If it does not, check the logs at ~/Library/Logs/Claude/mcp-server-linkedin.log.
Setup for a browser-automation MCP
The npm-published or pip-published browser-automation servers usually want a Chrome user-data directory and a LinkedIn session cookie:
{
  "mcpServers": {
    "linkedin-scraper": {
      "command": "npx",
      "args": ["-y", "linkedin-mcp-server@latest"],
      "env": {
        "LINKEDIN_COOKIE": "li_at=AQED...",
        "USER_DATA_DIR": "/Users/you/Library/Application Support/Patchright/Default"
      }
    }
  }
}
You will need to extract the li_at cookie from your browser dev tools manually. Treat that string like a password; whoever holds it can act as you on LinkedIn.
Setup for an API-based MCP
These ask for OAuth credentials issued by LinkedIn after partner approval:
{
  "mcpServers": {
    "linkedin-api": {
      "command": "node",
      "args": ["/usr/local/bin/linkedin-api-mcp.js"],
      "env": {
        "LINKEDIN_CLIENT_ID": "...",
        "LINKEDIN_CLIENT_SECRET": "...",
        "LINKEDIN_ACCESS_TOKEN": "..."
      }
    }
  }
}
The application flow runs through https://www.linkedin.com/developers/. Plan for a 2-6 week wait.
Setup for a native enrichment MCP (Derrick example)
A vendor-hosted MCP exposes an HTTPS endpoint and an API key. No local binary, no browser cookie.
{
  "mcpServers": {
    "derrick": {
      "type": "http",
      "url": "https://app1.derrick-app.com/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_DERRICK_API_KEY"
      }
    }
  }
}
The API key is generated from the Derrick dashboard once you are on the Standard plan ($20/mo minimum). The MCP exposes the same enrichment tools the Sheets sidebar uses: profile enrichment, company enrichment, email finder, phone finder, LinkedIn URL search, and a handful of others. After the restart, asking Claude “find the LinkedIn URL for John Doe at Acme Corp” routes through the MCP automatically.
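To sanity-check the endpoint and key outside Claude Desktop, a sketch using the MCP Python SDK's streamable HTTP client is below. The Authorization header mirrors the config above; the exact tool names it prints depend on the vendor, and the SDK is assumed to accept custom headers for auth.

```python
# Smoke-test a hosted MCP endpoint outside Claude Desktop (pip install mcp).
# The Authorization header mirrors the claude_desktop_config.json entry above.
import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def list_remote_tools(url: str, api_key: str) -> None:
    headers = {"Authorization": f"Bearer {api_key}"}
    async with streamablehttp_client(url, headers=headers) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(list_remote_tools("https://app1.derrick-app.com/mcp", "YOUR_DERRICK_API_KEY"))
```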
Chapter 4: What tools you actually get
Across the three flavors, the typical LinkedIn scraper MCP exposes a similar tool surface. Names vary; the underlying capability is consistent.
| Tool | What it does | Browser-automation | API-based | Native enrichment |
|---|---|---|---|---|
| get_profile | Fetch profile fields from a LinkedIn URL | Yes | Limited | Yes |
| get_company | Fetch company fields from a LinkedIn company URL | Yes | Yes | Yes |
| find_profile_by_name | Resolve name + company to a LinkedIn URL | Yes | Limited | Yes |
| find_email | Best-effort email lookup from a LinkedIn URL | No | No | Yes |
| find_phone | Best-effort phone lookup from a LinkedIn URL | No | No | Yes |
| search_companies | Find LinkedIn company URLs by name | Yes | Yes | Yes |
| list_employees | Fetch employees of a company | Yes (Sales Nav) | No | Partial |
| get_jobs | Fetch open job postings | Yes | Yes | Partial |
A real prospecting workflow stitches several of these together. That is what makes the AI assistant useful: it composes tool calls instead of you opening seven tabs.
Chapter 5: A real SDR workflow inside Claude Desktop
Here is the pattern that justifies the setup work in the first place. The SDR is preparing for a Monday morning sequence. They have a list of 30 target companies in a Google Sheet and need decision-maker LinkedIn URLs, headlines, and emails before Tuesday.
In Claude Desktop, with a native enrichment MCP connected, the conversation looks like this:
User: I have 30 companies in this sheet [pastes the list]. Find the head of sales at each one. Return LinkedIn URL, headline, current company, and email if available.
Claude calls search_companies for each company name to confirm the LinkedIn company URL. Then find_profile_by_name with role variants (“Head of Sales”, “VP Sales”, “Sales Director”, “Chief Revenue Officer”). Then find_email on each profile URL that returned a hit. Result: a markdown table in chat with 25-30 rows, ready to paste into the Sheet.
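The chain Claude composes is roughly the following, sketched with stub functions standing in for the MCP tool calls:

```python
# Rough sketch of the composed tool chain; the three functions are stand-ins
# for the MCP tools (search_companies, find_profile_by_name, find_email).
ROLE_VARIANTS = ["Head of Sales", "VP Sales", "Sales Director", "Chief Revenue Officer"]

def search_companies(name): return {"url": f"https://linkedin.com/company/{name}"}  # stub
def find_profile_by_name(role, company_url): return None                            # stub
def find_email(profile_url): return None                                            # stub

def prospect_row(company_name: str) -> dict:
    company = search_companies(company_name)        # confirm the LinkedIn company URL
    for role in ROLE_VARIANTS:                      # try role variants until one resolves
        profile = find_profile_by_name(role, company["url"])
        if profile:
            return {"company": company_name,
                    "linkedin_url": profile["url"],
                    "headline": profile["headline"],
                    "email": find_email(profile["url"])}  # best-effort, may be None
    return {"company": company_name, "linkedin_url": None, "headline": None, "email": None}
```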
Time elapsed: about 4 minutes for 30 companies. Same workflow done manually, opening LinkedIn search tabs, takes 90+ minutes.
The credit cost on a native enrichment MCP for this run: roughly 30 LinkedIn URL lookups (1 credit each) + 30 profile enrichments (1 credit each) + ~25 email lookups (5 credits per found email, so roughly 75-90 credits at a 60-70% hit rate). Total: roughly 135-150 credits. The Standard plan ($20/mo) ships with 10,000 credits, so a workflow like this consumes under 2% of the monthly budget.
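The same math as a quick worked calculation (credit prices as described in this guide; the 65% hit rate is an assumption in the middle of the stated range):

```python
# Worked credit math for the 30-company run above.
url_lookups   = 30 * 1                 # 30 credits
enrichments   = 30 * 1                 # 30 credits
emails_found  = round(25 * 0.65)       # ~16 found at an assumed 65% hit rate
email_credits = emails_found * 5       # ~80 credits (charged per found email)
total = url_lookups + enrichments + email_credits
print(total, f"{total / 10_000:.1%} of a 10,000-credit month")  # ~140 credits, ~1.4%
```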
The same workflow on a browser-automation MCP requires the SDR to keep their LinkedIn account in good standing, paginate carefully to avoid rate limits, and monitor for “we noticed unusual activity” warnings. The same workflow on an API-based MCP is not possible at all because cold profile lookup is not part of any LinkedIn partner program.
For a deeper take on Sales Navigator workflows specifically, the Sales Navigator free guide covers how to combine free-tier search with enrichment.
Chapter 6: The legal and operational picture
Three points worth reading carefully before you ship a LinkedIn scraper MCP into a production team workflow.
LinkedIn ToS Section 8.2
LinkedIn’s User Agreement explicitly prohibits “scraping, copying, or automated access” to the service. The clause has been there since 2014 and is the basis for the hiQ Labs v. LinkedIn line of cases (Ninth Circuit, 2017-2022). Browser-automation MCPs violate this clause regardless of how careful the rate limiting is.
API-based MCPs are exempt because they call documented partner endpoints. Native enrichment MCPs sit outside the LinkedIn ToS conversation entirely because they do not access LinkedIn directly; they query independent data sources that aggregate B2B data from public sources, opt-in databases, and partnerships.
GDPR and data lawful basis
In the EU, even compliant scraping is not the end of the legal question. Processing personal data requires a lawful basis (legitimate interest, contract, consent). For B2B prospecting, “legitimate interest” is usually defensible if you have a clear opt-out mechanism, a retention policy, and a proportionality argument. For consumer-facing flows, the bar is much higher. Get a privacy lawyer to review before going to volume.
For a deeper take on the legal layer, the GDPR data enrichment guide walks through the lawful basis question for B2B teams.
Account safety in practice
If you decide to run a browser-automation MCP, three habits significantly reduce the chance of an account flag:
- Use a burner LinkedIn account, not your personal one.
- Cap the MCP server to 50 profile fetches per day.
- Add a 30-60 second random delay between fetches.
These mitigations slow the workflow enough that, for most teams, the native enrichment route ends up being more cost-effective even before counting the legal risk.
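A minimal sketch of what the cap and the delay look like in a fetch loop; fetch_profile is a hypothetical stand-in for whatever fetch call the MCP server actually makes:

```python
# Daily cap plus randomized delay around profile fetches.
import random
import time

DAILY_CAP = 50

def fetch_profile(url: str) -> dict:
    return {"url": url}  # hypothetical stand-in for the server's real fetch call

def fetch_with_guardrails(urls: list[str]) -> list[dict]:
    results = []
    for url in urls[:DAILY_CAP]:            # hard stop at 50 fetches per day
        results.append(fetch_profile(url))
        time.sleep(random.uniform(30, 60))  # 30-60 s random delay between fetches
    return results
```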
Chapter 7: How to pick
A short decision tree that maps the three flavors to common team profiles.
Pick browser-automation MCP if: you have engineering capacity, you accept the LinkedIn ToS risk, you have a burner account you can replace cheaply, and your data needs include fields no API or vendor exposes (Sales Navigator full search results, group memberships, specific niche signals).
Pick API-based MCP if: you are building an integration LinkedIn would endorse (ATS, ad-tech, CRM partner sync). Plan for the partner application process. Do not pick this if your goal is cold prospect enrichment.
Pick native enrichment MCP if: you want LinkedIn data extraction inside Claude Desktop or ChatGPT today, with no setup beyond pasting an API key, no LinkedIn account flag risk, and predictable credit-based pricing. This covers the SDR, recruiter, growth marketer, and founder use cases.
For most teams in 2026, the native enrichment path is the one that ships in 10 minutes and stays running. The browser-automation path is the one that keeps engineers busy. The API-based path is the one that shows up in board meetings and rarely in production prospecting.
Key takeaways
- A LinkedIn scraper MCP is an MCP server whose tools extract LinkedIn data; the AI assistant becomes the operator.
- Three flavors in 2026: browser-automation (flexible, ToS-violating, account-risk), API-based (compliant but narrow), native enrichment (lowest friction, vendor-sourced).
- Setup is a JSON entry in claude_desktop_config.json and a restart.
- The native enrichment flavor is the lowest-friction path for SDRs, recruiters, and growth teams.
- LinkedIn ToS Section 8.2 still applies to browser-automation; GDPR adds a layer on top in the EU.
FAQ
What is the difference between a LinkedIn scraper and a LinkedIn MCP?
A scraper is the underlying logic that extracts data. An MCP is the protocol that lets an AI assistant call that logic as a tool. You can have a scraper without an MCP (a Python script you run by hand). You can also have an MCP without a scraper (a native enrichment MCP that calls third-party APIs instead of scraping). The two are independent.
Do I need a LinkedIn account to use a LinkedIn MCP?
For browser-automation MCPs, yes; you need a working LinkedIn account and you have to expose the session cookie to the server. For API-based MCPs, you need partner credentials. For native enrichment MCPs, you do not strictly need to log in for the MCP itself to work, but the workflow usually starts from LinkedIn URLs the user collected by browsing LinkedIn, which still requires a LinkedIn account in practice. Be careful with vendors that claim “no LinkedIn login required” because that is rarely literally true.
Is using a LinkedIn scraper MCP legal?
It depends on the flavor. Browser-automation MCPs violate LinkedIn’s User Agreement. API-based MCPs are compliant if you are an approved partner. Native enrichment MCPs sit outside the LinkedIn ToS conversation but still have to comply with GDPR, CCPA, and other data protection regimes. Get a privacy lawyer to review before running at volume.
How much does a LinkedIn MCP cost?
Open-source browser-automation MCPs are free in software but cost the price of a LinkedIn account if it gets banned, and engineering time to maintain. API-based MCPs require an enterprise LinkedIn partner contract. Native enrichment MCPs price per credit; entry tiers run from $9-$20 per month for 4,000-10,000 credits, which covers thousands of profile lookups.
Will my LinkedIn account get banned if I use a scraper MCP?
If you use a browser-automation MCP at sustained volume (more than 80-100 profile views per day from one account), yes, the risk is real. If you stay under that ceiling and use random delays, the practical risk is closer to manual browsing. API-based and native enrichment MCPs do not touch your LinkedIn account.
Can I use multiple LinkedIn MCPs at once in Claude Desktop?
Yes. Claude Desktop loads all MCP servers listed in the config and exposes their tools to the model. You can combine, for example, a native enrichment MCP for the daily prospecting workflow and a separate research MCP for academic-style profile reads. The model picks the right tool by name.
What is the best LinkedIn MCP server for 2026?
Depends on the use case. For compliant prospecting workflows inside Claude Desktop or ChatGPT with no setup, native enrichment MCPs (Derrick MCP, others) are the lowest-friction. For developers who want to read everything visible on a profile and accept the tradeoffs, the open-source browser-automation servers (stickerdaniel/linkedin-mcp-server, linkedin-scraper-mcp on PyPI) are the most actively maintained as of mid-2026. There is no LinkedIn-published official MCP server.