The Ultimate Guide to LinkedIn X-Ray Searches (2024 Update)

Colton Randolph

Hi! I'm Colton, a technical writer here at Proxycurl -- I love showing you how to combine B2B data with actionable use cases that move the needle.


LinkedIn X-ray searches are a technique that gives you far more freedom when querying LinkedIn's B2B database for interesting companies, jobs, and individuals.

Technically, there are two similar but functionally different methods referred to as "LinkedIn X-ray searches"--both being aimed at improving the process of searching for and extracting data from LinkedIn search results.

The first method uses Boolean operators (such as NOT, AND, OR) within the LinkedIn search engine to improve search results.

For example, let's say you wanted to find and hire a content marketer or content strategist, but you wanted to make sure they were also writers themselves, as well as familiar with SEO standards.

Rather than sifting through thousands of profiles on LinkedIn to find the right prospect, you could use the following search query to instantly find them, "content marketer" OR "content strategist" AND "writer" AND "SEO":

[screenshot]

Quite a bit narrower than the original 385,000 results for "content marketer":

[screenshot]

The second method for performing LinkedIn X-ray searches is using Google, with operators to refine your results. The main difference is that we're using Google's search engine to find indexed LinkedIn pages instead of LinkedIn's search engine.

We also don't need to be logged into a LinkedIn account to do the second method (hence the name "LinkedIn X-ray search"), so there's no risk of getting banned. Most people prefer this method due to its scalability.

To give you an example, let's revisit our earlier example and look at the Google search results for LinkedIn content writer and SEO:

[screenshot]

Wow, that's a lot. Let's use a bit of magic and confine our search results to only LinkedIn.com:

[screenshot]

Better. Now let's refine that search query further with operators: site:linkedin.com/in ("content marketer" OR "content strategist") AND "writer" AND "SEO":

[screenshot]

Much better!

In this article, I'll explain how to use both methods to your advantage.

First, though, we need to talk a bit more about Boolean operators:

A Boolean operator is a word or symbol used in logic and search queries to combine or exclude keywords, resulting in more focused and specific search results. It's basically like searching on steroids.

Both LinkedIn and Google's search engines support the following Boolean operators:

Operator | Function | Example Usage
AND | Results must include all specified terms | "developer AND manager"
OR | Results may include any of the specified terms | "sales OR marketing"
NOT | Excludes results containing the specified term (Google uses "-" for NOT) | "engineer NOT civil"

We're about to get into further examples of using search operators on LinkedIn and performing LinkedIn X-ray searches on Google. Before we do, a quick heads-up: for the examples throughout this article, we'll assume we're working in some type of HR role and need to find a candidate to fill a job opening.

Don't worry if that doesn't apply to you -- if you're in sales or marketing, for example, you can still use these same methods to find prospects for your use case. You'll just slightly modify your search query.

Okay, now let's put this into practice:

On top of just Boolean operators, LinkedIn also supports the following other search operators:

Operator | Function | Example
" " | Search for an exact phrase | "product manager"
( ) | Group terms in complex queries | (senior OR junior) AND engineer
first: | Search by first name | first:John
last: | Search by last name | last:Doe

These can further help refine our search results.

First up, the "AND" operator:

"AND" operators

So, let's say we need to find a software engineer who lives in Los Angeles and has experience with both JavaScript and Linux.

Here's how we could use Boolean operators to help us here, "software engineer" AND "los angeles" AND "javascript" AND "linux":

[screenshot]

That returns 374 results for profiles that match all of our search criteria. Here's the first profile:

[screenshot]

"NOT" operators

Let's say for whatever reason, the company we're recruiting for doesn't hire programmers that use Kotlin.

The profile returned above also has Kotlin, the programming language, listed as a skill:

[screenshot]

We can refine our search accordingly, "software engineer" AND "los angeles" AND "javascript" AND "linux" NOT "kotlin":

[screenshot]

You'll see the number of results returned drops slightly, and our friend from earlier is no longer included.

Filtering with location and putting everything together

We can take this one step further: assume everything above still applies, but the company we're recruiting for has offices in both Los Angeles and New York City, so a candidate could work out of either office.

We can expand our original search by altering it slightly, "software engineer" AND ("los angeles" OR "new york city") AND "javascript" AND "linux" NOT "Kotlin":

[screenshot]

The amount nearly doubles by including New York City as an option. All profiles returned still fit our earlier search. Nice!
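Queries like this can also be assembled programmatically, which becomes handy once you start varying roles, locations, and skills. Here's a minimal Python sketch (the build_boolean_query helper is my own illustration, not part of any LinkedIn tooling) that reproduces the query above from its parts:

```python
def build_boolean_query(role, locations, skills, exclude):
    """Assemble a LinkedIn Boolean search query string.

    Quotes each term, ORs the locations together, ANDs the required
    skills, and appends NOT for each excluded term.
    """
    parts = [f'"{role}"']
    if locations:
        parts.append("(" + " OR ".join(f'"{loc}"' for loc in locations) + ")")
    for skill in skills:
        parts.append(f'"{skill}"')
    query = " AND ".join(parts)
    for term in exclude:
        query += f' NOT "{term}"'
    return query

query = build_boolean_query(
    "software engineer",
    ["los angeles", "new york city"],
    ["javascript", "linux"],
    ["kotlin"],
)
print(query)
# "software engineer" AND ("los angeles" OR "new york city") AND "javascript" AND "linux" NOT "kotlin"
```

The same helper covers the simpler cases too, e.g. build_boolean_query("content marketer", [], ["SEO"], []) yields "content marketer" AND "SEO".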

It may sound silly, but the main con of this way of searching for prospects is that you have to provide your own LinkedIn account and be actively logged into it. You can't use LinkedIn's search engine otherwise.

You'll also inevitably run into account limits and bans by doing this in bulk, and you can't access more than 1,000 results at any given time on a free LinkedIn account, even if it tells you more results were returned.

It's simply not a scalable method, and you can't automate it for the most part.

This is why LinkedIn operators, while they improve search results, aren't as powerful as using Google to perform a true LinkedIn X-ray search, which doesn't carry the same limitations.

That said, let me show you how to do this in a bit of a better way by using Google:

On top of AND, OR, and NOT, Google also supports the following additional search operators:

Operator | Function | Example
" " | Searches for an exact phrase | "climate change"
- | As mentioned above, Google uses "-" for NOT | jaguar -car
site: | Limits the search to a specific website or domain | site:nytimes.com
related: | Finds websites similar to a specified site | related:time.com
filetype: | Searches for a specific file type | filetype:pdf "renewable energy"
intitle: | Finds pages with a specific term in the title | intitle:conservation
inurl: | Searches for a specific term within the URL | inurl:nutrition
intext: | Searches for a specific term within the text of a page | intext:"global warming"
* | Acts as a wildcard | world * champion
AROUND(X) | Finds terms within a certain number of words of each other | solar AROUND(3) energy
cache: | Shows the most recent cached version of a web page | cache:google.com

We can use these operators to our advantage.

site:linkedin.com/in will be our starting point; it's what allows us to search Google exclusively for LinkedIn results.

(Note: Remember the part from earlier where I mentioned you can also do this for companies and jobs? The only difference is that companies have a URL structure with /company instead, for example: https://www.linkedin.com/company/microsoft/, and jobs have a URL structure of /jobs, such as: https://www.linkedin.com/jobs/search/?currentJobId=3825567794.)
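To make that concrete, here's a tiny Python sketch (the helper and the prefix map are my own illustration) that prepends the right site: operator for each LinkedIn URL type:

```python
# Hypothetical helper: maps each LinkedIn URL type to its X-ray prefix.
XRAY_PREFIXES = {
    "people": "site:linkedin.com/in/",
    "companies": "site:linkedin.com/company/",
    "jobs": "site:linkedin.com/jobs/",
}

def xray_query(kind, *terms):
    """Build a Google X-ray query by prepending the right site: operator."""
    return " ".join((XRAY_PREFIXES[kind],) + terms)

print(xray_query("people", 'intitle:"full-stack developer"', "(django OR react)"))
# site:linkedin.com/in/ intitle:"full-stack developer" (django OR react)
```

Swap "people" for "companies" or "jobs" and the same terms search the corresponding page type instead.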

LinkedIn X-ray searches by job role

So, let's say we want to find a full-stack developer. Here's how we could do just that by using a LinkedIn X-ray search, site:linkedin.com/in/ intitle:"full-stack developer":

[screenshot]

11,100 results returned, all LinkedIn profiles with "full-stack developer" in the title:

[screenshot]

LinkedIn X-ray searches by job role and skill

Now, let's narrow that search down a bit and say we need them to be familiar with both Django and React, site:linkedin.com/in/ intitle:"full-stack developer" (django OR react):

[screenshot]

6,560 results returned, with profiles such as this:

[screenshot]

Excluding skills, roles, or anything from your LinkedIn X-ray search

Next, let's say we want the same as above, but we want to exclude PHP programmers.

Here's how we could do that, site:linkedin.com/in/ intitle:"full-stack developer" intext:django OR intext:react -php:

[screenshot]

Quite a few fewer results, with none of the returned profiles listing PHP as a skill:

[screenshot]

I think you're probably just about getting the point, but I'll show you another trick:

Location-based filtering with LinkedIn X-ray searches

One way of doing this is by simply adding the location into the search such as this, site:linkedin.com/in/ intitle:"full-stack developer" "new york city" (django OR react) -php:

[screenshot]

Obviously narrowing the search results down, matching profiles like this:

[screenshot]

170 results. Full of LinkedIn profiles like this:

[screenshot]

Not bad! You can get even more sophisticated with operators for even more refined search results, too.

Now let's talk a bit more about extracting this data from LinkedIn:

Ideally, we want to automate and systemize as much as possible, especially if you're doing this at scale, for example, for recruiting.

So, to help accomplish this, we can use a tool like Value Serp, which is a search engine result page API (ELI5: an API is like a fast food menu for data--it makes it easy to send/receive data between applications):

[screenshot]

Its sole job is to scrape search result pages at scale and conveniently return the data. This helps us avoid issues like running into CAPTCHAs or being blocked outright (otherwise, you'd have to source your own proxies).

Value Serp starts at $2.50 per 1,000 Google results scraped, which is reasonable, but there are other similar services out there. This is one of those things that either works or it doesn't, so if you find an equivalent service for less, it should do just as good a job.

Anyway, using Value Serp's API, we can extract the results of Google search queries at scale:

[screenshot]

And we can conveniently export these results to a .CSV:

[screenshot]

Here's a Germany-based example, site:de.linkedin.com/in/ intitle:"full-stack developer" "frankfurt" (django OR react) -php, exported:

[screenshot]

You could do this to thousands of profiles at once, all fitting different LinkedIn X-ray search queries or recruiting prospects.
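As a rough sketch of how this looks in code: the two helpers below query Value Serp for an X-ray search and dump the result links to a .CSV. The endpoint and the 'api_key'/'q'/'num' parameter names, plus the 'organic_results' response field, are based on my reading of Value Serp's docs -- double-check them against the current documentation before relying on this.

```python
import csv
import requests

def fetch_profile_urls(api_key, query, num=100):
    """Query Value Serp for a Google X-ray search and return the result links.

    Assumption: Value Serp's GET /search endpoint takes 'api_key', 'q',
    and 'num', and returns organic hits under 'organic_results'.
    """
    response = requests.get('https://api.valueserp.com/search', params={
        'api_key': api_key,
        'q': query,
        'num': num,
    })
    response.raise_for_status()
    return [r.get('link', '') for r in response.json().get('organic_results', [])]

def write_urls_csv(urls, path='input_linkedin_profiles.csv'):
    """Write one LinkedIn profile URL per row, ready for later enrichment."""
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        for url in urls:
            writer.writerow([url])
```

The resulting file has the same one-URL-per-row shape as the input_linkedin_profiles.csv used for enrichment later in this article.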

This is where LinkedIn X-ray searches truly shine: their programmatic, scalable nature compared to traditional LinkedIn prospecting methods.

Continuing on:

You now have a way to search for qualified prospects that meet specified criteria. You know how to search for LinkedIn profile URLs at scale and export the results without limitations.

But a LinkedIn profile URL alone doesn't do us much good.

Sure, you could click that URL, manually send them a connection request, and try to message them. Ideally, though, I'd want their email and phone number as well -- all three channels, if possible, for multiple touchpoints.

So, what's an easy way to do that?

Well, none other than Proxycurl, of course: a B2B data provider and API.

Let me explain more:

Proxycurl's Person Profile Endpoint

One of the key endpoints of Proxycurl is the Person Profile Endpoint, which allows you to extract quite a bit of data from public LinkedIn profiles: education, employment history, skills -- essentially everything you'd find on a LinkedIn profile.

But it doesn't just extract LinkedIn information. The B2B data provided by our API is also enriched with other data sources, which is our secret sauce.

Anyway, by using our Person Profile Endpoint, you can take your newly exported list of LinkedIn profiles and enrich them, improving both the quantity and quality of data you have on any given prospect. And we can do it in a nearly automatic fashion that requires almost no effort or human intervention.

Using Proxycurl's API for enrichment

In the above Value Serp API example, I used their built-in API sandbox feature. For the following Proxycurl example, I'll use a bit of Python, one of the easiest programming languages to pick up, to accomplish our LinkedIn profile enrichment.

(For the non-programmers: there are many different ways to request data from an API. I chose Python because I'm familiar with it, and you can easily use a free Python IDE like PyCharm on almost any device.)

That said, using a bit of Python, here's how we could pull quite a bit of information from any given LinkedIn profile:

import requests

api_key = 'Your_API_Key_Here'
headers = {'Authorization': 'Bearer ' + api_key}
api_endpoint = 'https://nubela.co/proxycurl/api/v2/linkedin'
params = {
    'linkedin_profile_url': 'https://www.linkedin.com/in/russellbrunson/',
    'extra': 'include',
    'github_profile_id': 'include',
    'facebook_profile_id': 'include',
    'twitter_profile_id': 'include',
    'personal_contact_number': 'include',
    'personal_email': 'include',
    'inferred_salary': 'include',
    'skills': 'include',
    'use_cache': 'if-present',
    'fallback_to_cache': 'on-error',
}

try:
    response = requests.get(api_endpoint, params=params, headers=headers)
    response.raise_for_status()  # Raise an exception for HTTP errors (e.g., 404, 500)

    # Print the response content and status code
    print("Response Content:")
    print(response.text)
    print("\nResponse Status Code:", response.status_code)
except requests.exceptions.RequestException as e:
    print("An error occurred during the HTTP request:", e)
except Exception as ex:
    print("An unexpected error occurred:", ex)

In this example, it would enrich the linkedin_profile_url of https://www.linkedin.com/in/russellbrunson/ (random example, he's a co-founder of the company ClickFunnels), immediately printing the results:

[screenshot]

(Note: You can try this for yourself by creating an account here; we give you 15 free credits upon account creation to test a few things out.)

The JSON result returned at the bottom looks like this:

{
    "public_identifier": "russellbrunson",
    "profile_pic_url": "normal_url.com",
    "background_cover_image_url": null,
    "first_name": "Russell",
    "last_name": "Brunson",
    "full_name": "Russell Brunson",
    "follower_count": 81644,
    "occupation": "Owner at ClickFunnels",
    "headline": "New York Times Bestselling Author, Co-Founder of ClickFunnels",
    "summary": "Over the past 14 years, Russell has built a following of over 2 million entrepreneurs, sold over 450,000 copies of his books, popularized the concept of sales funnels, and co-founded ClickFunnels, a software company that helps 90,000 entrepreneurs quickly get their message out to the marketplace. \n\nRussell has been featured on major publications and websites such as Forbes, Entrepreneur Magazine, and The Huffington Post. He is also the host of the #1 rated business podcast, Marketing Secrets. In 2018, he was awarded \u2018Entrepreneur of the Year\u2019 in the Utah region by ey.com. Russell also regularly works with non-profits like Operation Underground Railroad and Village Impact.",
    "country": "US",
    "country_full_name": "United States of America",
    "city": null,
    "state": null,
    "experiences": ...(continues on with more data)...

You can see all of the different data points that can be returned by the Person Profile Endpoint here.

So, of course, you could feed individual LinkedIn profiles into your new Python script to enrich one at a time, but you could also automate the entire process, using a familiar file format like .CSV to store your LinkedIn profile URLs.

How to easily enrich LinkedIn profiles at scale

In fact, here's a script that does just that: it reads a .CSV named input_linkedin_profiles.csv full of LinkedIn URLs, enriches them, and writes the results to a new .CSV named enriched_data.csv:

import csv
import requests

# Your API key
API_KEY = 'Your_API_Key_Here'

# API endpoint for Person Profile
api_endpoint = 'https://nubela.co/proxycurl/api/v2/linkedin'

# Headers for the API request
headers = {'Authorization': 'Bearer ' + API_KEY}

# Input and output CSV file names
input_file = 'input_linkedin_profiles.csv'
output_file = 'enriched_data.csv'

# Function to extract experiences as a string
def extract_experiences(profile):
    experiences = profile.get('experiences', [])
    return '; '.join([f"{exp.get('title', '')} at {exp.get('company', '')}" for exp in experiences])

# Read LinkedIn profile URLs from the input CSV file
with open(input_file, 'r') as csvfile:
    reader = csv.reader(csvfile)
    linkedin_urls = [row[0] for row in reader]

# Open the output CSV file for writing
with open(output_file, 'w', newline='') as csvfile:
    fieldnames = [
        'linkedin_url', 'full_name', 'profile_picture', 'current_occupation',
        'country', 'city', 'state', 'experiences', 'skills',
        'personal_emails', 'inferred_salary'
    ]
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()

    # Loop through each LinkedIn profile URL
    for linkedin_url in linkedin_urls:
        params = {
            'linkedin_profile_url': linkedin_url,
            'extra': 'include',
            'github_profile_id': 'include',
            'facebook_profile_id': 'include',
            'twitter_profile_id': 'include',
            'personal_contact_number': 'include',
            'personal_email': 'include',
            'inferred_salary': 'include',
            'skills': 'include',
            'use_cache': 'if-recent',
            'fallback_to_cache': 'on-error',
        }
        response = requests.get(api_endpoint, params=params, headers=headers)
        if response.status_code == 200:
            profile = response.json()
            # Extracting skills and experiences
            skills = ", ".join(profile.get('skills', []))
            experiences_str = extract_experiences(profile)
            writer.writerow({
                'linkedin_url': linkedin_url,
                'full_name': profile.get('full_name', ''),
                'profile_picture': profile.get('profile_pic_url', ''),
                'current_occupation': profile.get('occupation', ''),
                'country': profile.get('country_full_name', ''),
                'city': profile.get('city', ''),
                'state': profile.get('state', ''),
                'experiences': experiences_str,
                'skills': skills,
                'personal_emails': "; ".join(profile.get('personal_emails', [])),
                'inferred_salary': profile.get('inferred_salary', {}).get('min', '')  # Example of handling nested data
            })
        else:
            print(f"Failed to fetch data for {linkedin_url}, Status Code: {response.status_code}")

print(f"Data exported to {output_file}")
[screenshot]

There's no limit to this, so you can automate the enrichment of thousands and thousands of prospects/LinkedIn profile URLs directly generated from a tool like Value Serp. You could also integrate our API directly with Value Serp's API and skip the .CSV with a little bit more effort.

But there's also another way of doing this as well...

Proxycurl's Person Search Endpoint

Proxycurl's Person Search Endpoint is another useful endpoint, especially for those in the recruitment sector. It allows you to search for individuals by using job roles, education, the companies they work for, and more.

So you can skip Value Serp and search directly within our existing dataset of millions and millions of LinkedIn profiles (powered by LinkDB). And it's pretty darn simple to do, much like the examples above.

Searching based on job role and company

Let's say we want to hire a project manager, and we want them to be from a big tech background.

We'll use Microsoft as the source company (though we could, of course, use others, such as AWS), and then we can use the following Python script to search our dataset for anyone matching "project manager" at "Microsoft" and return an enriched result:

import requests

headers = {'Authorization': 'Bearer ' + 'Your_API_Key_Here'}
api_endpoint = 'https://nubela.co/proxycurl/api/search/person/'
params = {
    'country': 'US',
    'current_role_title': '(?i)Project Manager',
    'current_company_linkedin_profile_url': 'https://www.linkedin.com/company/microsoft',
    'page_size': '10',
    'enrich_profiles': 'enrich',
}

try:
    response = requests.get(api_endpoint, params=params, headers=headers)
    response.raise_for_status()  # Raise an exception for HTTP errors (e.g., 404, 500)

    # Print the response content and status code
    print("Response Content:")
    print(response.text)
    print("\nResponse Status Code:", response.status_code)
except requests.exceptions.RequestException as e:
    print("An error occurred during the HTTP request:", e)
except Exception as ex:
    print("An unexpected error occurred:", ex)

Here's an example profile:

[screenshot]

It should be noted that when using the enrich_profiles parameter, you're limited to a page_size of 10 (which is why you see that limit above), but you can use the next_page field to pull large enriched lists with ease.

Extracting all search results

Here's a slightly altered script that keeps going until there are no more results to return:

import requests

api_key = 'Your_API_Key_Here'
headers = {'Authorization': 'Bearer ' + api_key}
api_endpoint = 'https://nubela.co/proxycurl/api/search/person/'
params = {
    'country': 'US',
    'current_role_title': '(?i)Project Manager',
    'current_company_linkedin_profile_url': 'https://www.linkedin.com/company/microsoft',
    'page_size': '10',
    'enrich_profiles': 'enrich',
}

def fetch_page(url, params):
    response = requests.get(url, params=params, headers=headers)
    if response.status_code == 200:
        return response.json()
    else:
        print("Error:", response.status_code, response.text)
        return None

# Fetch the initial page
response_data = fetch_page(api_endpoint, params)

# Loop through pages (including the final page, which has no 'next_page')
while response_data:
    # Process the results here
    for result in response_data.get('results', []):
        print(result)  # or any other processing

    # Fetch the next page, if there is one
    next_page_url = response_data.get('next_page')
    if not next_page_url:
        break
    if 'country' not in next_page_url:
        next_page_url += '&country=US'  # Append 'country' parameter
    response_data = fetch_page(next_page_url, {})

# End of pagination
print("Completed fetching all pages.")

Exporting search results to a .CSV

We could even take these results and export them to a .CSV like so:

import requests
import csv
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

# API credentials and endpoint configuration
api_key = 'Your_API_Key_Here'
headers = {'Authorization': 'Bearer ' + api_key}
api_endpoint = 'https://nubela.co/proxycurl/api/search/person/'

# Parameters for the initial request
initial_params = {
    'country': 'US',
    'current_role_title': '(?i)Project Manager',
    'current_company_linkedin_profile_url': 'https://www.linkedin.com/company/microsoft',
    'page_size': '10',
    'enrich_profiles': 'enrich',
}

# Define the output CSV file and headers
output_file = 'enriched_profiles.csv'
fieldnames = [
    'linkedin_profile_url', 'full_name', 'profile_picture',
    'background_cover_image_url', 'current_occupation', 'location'
]

def fetch_profiles(url, headers, params=None):
    """Fetch profiles from the given URL with specified parameters."""
    response = requests.get(url, headers=headers, params=params)
    if response.status_code == 200:
        return response.json()
    else:
        print(f"Error: {response.status_code}, {response.text}")
        return None

def add_country_to_url(url, country):
    """Add the 'country' parameter to the given URL."""
    parsed_url = urlparse(url)
    query_params = parse_qs(parsed_url.query, keep_blank_values=True)
    query_params['country'] = [country]  # Ensure 'country' parameter is included
    new_query = urlencode(query_params, doseq=True)
    return urlunparse(parsed_url._replace(query=new_query))

def process_profile_data(profile):
    """Process and return the profile data in a dict format."""
    return {
        'linkedin_profile_url': profile.get('linkedin_profile_url', ''),
        'full_name': profile.get('profile', {}).get('full_name', ''),
        'profile_picture': profile.get('profile', {}).get('profile_pic_url', ''),
        'background_cover_image_url': profile.get('profile', {}).get('background_cover_image_url', ''),
        'current_occupation': profile.get('profile', {}).get('occupation', ''),
        'location': f"{profile.get('profile', {}).get('city', '')}, {profile.get('profile', {}).get('country_full_name', '')}"
    }

# Open the output CSV file for writing
with open(output_file, mode='w', newline='') as file:
    writer = csv.DictWriter(file, fieldnames=fieldnames)
    writer.writeheader()

    # Fetch the initial page of profiles
    profiles_data = fetch_profiles(api_endpoint, headers, initial_params)
    if profiles_data and 'results' in profiles_data:
        # Iterate through each profile and write to CSV
        for profile in profiles_data['results']:
            writer.writerow(process_profile_data(profile))

        # Handle pagination
        while 'next_page' in profiles_data:
            next_page_url = add_country_to_url(profiles_data['next_page'], initial_params['country'])
            profiles_data = fetch_profiles(next_page_url, headers)
            if profiles_data and 'results' in profiles_data:
                for profile in profiles_data['results']:
                    writer.writerow(process_profile_data(profile))
            else:
                break

print("Completed fetching and storing all profile data.")

It would export them to a file named enriched_profiles.csv that looks like this:

[screenshot]

Exporting phone numbers and emails with our Search API

Notice, however, that there's no phone number or email...

We could use our Personal Contact Number Lookup Endpoint and Personal Email Lookup Endpoint to change that.

Here's our updated Python script, keeping the same "project manager" at "Microsoft" example:

import requests
import csv
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

# API credentials and endpoint configuration
api_key = 'Your_API_Key_Here'  # Replace with your actual API key
headers = {'Authorization': 'Bearer ' + api_key}
search_api_endpoint = 'https://nubela.co/proxycurl/api/search/person/'
contact_phone_endpoint = 'https://nubela.co/proxycurl/api/contact-api/personal-contact'
contact_email_endpoint = 'https://nubela.co/proxycurl/api/contact-api/personal-email'

# Parameters for the initial search request
search_params = {
    'country': 'US',
    'current_role_title': '(?i)Project Manager',
    'current_company_linkedin_profile_url': 'https://www.linkedin.com/company/microsoft',
    'page_size': '10',
    'enrich_profiles': 'enrich',
}

# Define the output CSV file and headers
output_file = 'enriched_profiles_with_contacts.csv'
fieldnames = [
    'linkedin_profile_url', 'full_name', 'profile_picture',
    'background_cover_image_url', 'current_occupation', 'location',
    'personal_phone_number', 'personal_email'
]

def fetch_contact_info(api_endpoint, linkedin_profile_url):
    """Fetch personal contact info (phone number or email)."""
    params = {'linkedin_profile_url': linkedin_profile_url}
    response = requests.get(api_endpoint, headers=headers, params=params)
    if response.status_code == 200:
        data = response.json()
        if api_endpoint.endswith('personal-contact'):
            return ', '.join(data.get('numbers', []))  # Join all numbers into a single string
        elif api_endpoint.endswith('personal-email'):
            return ', '.join(data.get('emails', []))  # Join all emails into a single string
    else:
        print(f"Error fetching contact info from {api_endpoint}: {response.status_code}, {response.text}")
    return ''  # Return empty string if no data or in case of error

def add_country_to_url(url, country):
    """Add the 'country' parameter to the given URL."""
    parsed_url = urlparse(url)
    query_params = parse_qs(parsed_url.query, keep_blank_values=True)
    query_params['country'] = [country]
    new_query = urlencode(query_params, doseq=True)
    return urlunparse(parsed_url._replace(query=new_query))

def process_and_write_profiles(writer):
    """Fetch profiles and write their details, including contact info, to the CSV file."""
    url = search_api_endpoint
    params = search_params.copy()
    while url:
        response = requests.get(url, headers=headers, params=params)
        if response.status_code != 200:
            print(f"Error fetching profiles: {response.status_code}, {response.text}")
            break
        data = response.json()
        for profile in data.get('results', []):
            # Fetch contact information
            email_info = fetch_contact_info(contact_email_endpoint, profile['linkedin_profile_url'])
            phone_info = fetch_contact_info(contact_phone_endpoint, profile['linkedin_profile_url'])

            # Write profile and contact information to CSV
            writer.writerow({
                'linkedin_profile_url': profile['linkedin_profile_url'],
                'full_name': profile['profile']['full_name'],
                'profile_picture': profile['profile']['profile_pic_url'],
                'background_cover_image_url': profile['profile']['background_cover_image_url'],
                'current_occupation': profile['profile']['occupation'],
                'location': f"{profile['profile']['city']}, {profile['profile']['country_full_name']}",
                'personal_phone_number': phone_info,
                'personal_email': email_info,
            })

        # Prepare for the next page
        next_page_url = data.get('next_page')
        if next_page_url:
            url = add_country_to_url(next_page_url, search_params['country'])
            params = {}  # All needed params are in the URL, so clear them to avoid duplication
        else:
            break

# Open the output CSV file for writing and process profiles
with open(output_file, mode='w', newline='') as file:
    writer = csv.DictWriter(file, fieldnames=fieldnames)
    writer.writeheader()
    process_and_write_profiles(writer)

print("Completed fetching and storing all profile data with contact information.")

This exports an enriched list of prospects, including phone number and email (provided we have them available), to a file named enriched_profiles_with_contacts.csv:

[screenshot]

I've gone ahead and blurred all contact information, but as you can see, it pulled additional contact information for many of our earlier prospects.

Now we're really talking...

There are 100 different ways you could build on this, even going as far as integrating outreach channels directly with our API (sending emails and messages automatically) and beyond.

But, as you can see, our Search API is quite powerful, and you can avoid many of the cons of a traditional LinkedIn search or LinkedIn X-ray search.

If you've made it this far but aren't a programmer, don't worry -- we also have a no-code option that anyone can use.

It's a Google Sheets extension named Sapiengraph that'll allow you to conveniently pull all of this same data into a Google Sheets spreadsheet.

[screenshot]

It won't have the same level of functionality or customizability as our API, but it's still very effective and worth checking out!

There are several other API endpoints we offer that could be of value to you, but I'm not going to mention every single one of them here for the sake of brevity.

The most immediately relevant one not already shown above is our Employee Listing Endpoint, which works similarly to our Person Search Endpoint but instead searches a given company for matching job roles.
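As a rough illustration of how that might look, here's a sketch in the same style as the earlier scripts. The endpoint path and the 'url'/'role_search' parameter names are my assumptions modeled on the calls above -- verify them against our documentation before using this:

```python
import requests

# Assumed endpoint path for the Employee Listing Endpoint -- check the docs.
API_ENDPOINT = 'https://nubela.co/proxycurl/api/linkedin/company/employees/'

def build_employee_listing_request(company_url, role_regex):
    """Return the endpoint and query params for an employee-listing call.

    Assumption: 'url' takes the company profile URL and 'role_search'
    takes a regex matched against job titles, as in the search examples.
    """
    params = {
        'url': company_url,
        'role_search': role_regex,
    }
    return API_ENDPOINT, params

def list_employees(api_key, company_url, role_regex):
    """Perform the request and return the parsed JSON response."""
    endpoint, params = build_employee_listing_request(company_url, role_regex)
    headers = {'Authorization': 'Bearer ' + api_key}
    response = requests.get(endpoint, params=params, headers=headers)
    response.raise_for_status()
    return response.json()
```

For example, list_employees(api_key, 'https://www.linkedin.com/company/microsoft', '(?i)project manager') would mirror the Person Search example from earlier, but scoped to one company.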

Outside of that, for a full understanding of what's possible, I would read over our documentation here.

At this point, you have a few options:

  1. You can take your newly gained knowledge and do nothing with it.

  2. You can take your newly gained knowledge about LinkedIn X-ray searches and implement it.

  3. You can take your newly gained knowledge about LinkedIn X-ray searches and our B2B enrichment APIs and fully utilize it to your advantage.

The choice is yours to make, but, if you're thinking what I'm thinking in my admittedly very biased position...

It's free to create a Proxycurl account and you start out with a few trial credits to test things out. What do you have to lose?

That said, you can click right here to create your account for free now.

If you're interested in learning more about our credit usage system and pricing policy first, you can do that here.

Whether you're looking to fill a job role, find a sales lead, or beyond, LinkedIn X-ray searches offer a scalable and programmatically friendly way to access the vast amounts of B2B data available on LinkedIn. It just requires a bit of technical know-how.

But, if you want the most convenient way, you can skip LinkedIn X-ray searches altogether and use a B2B data provider and API like ours to handle all of the headaches for you (like scraping LinkedIn profiles and Google SERPs). You can simply pull rich B2B data instead.

Thanks for reading, and here's to both more and better-quality data!

P.S. Have any questions about Proxycurl? Feel free to reach out to us at "[emailprotected]" and we'll be glad to help!
