
How ATS Systems Work: Inside the Machine That Screens Your CV

Most job seekers send dozens of applications and hear nothing back. The reason usually isn't their experience — it's that their CV never reached a human. Applicant Tracking Systems process your document in milliseconds, build a structured data record from it, and rank you against every other applicant before a recruiter opens their dashboard. Here's exactly what happens inside that pipeline.

JOBVIAN Team
March 19, 2026 · 9 min read

Key takeaways

  • ATS software scores and ranks every application before a recruiter opens their dashboard — most candidates are filtered before any human ever reads their CV.
  • The parser strips your CV into a structured data record: extracted job title, calculated experience years, skills list, education, certifications.
  • Legacy ATS platforms use exact string matching; modern platforms use NLP — but exact keywords still outperform near-matches on all systems.
  • Two-column layouts, graphics, and non-standard section headers cause entire blocks of your CV to be misread or ignored.
  • Rejected applications are archived, not deleted — but recruiters almost never search the archive, making a low score effectively permanent.
  • Top 10–15: candidates a recruiter reviews first in high-volume roles
  • 99%: of Fortune 500 companies use an ATS
  • 6 sec: average time recruiters spend on initial CV review
  • 82–88%: practical match score target — above this, optimise for the human reader

What is an Applicant Tracking System?

An Applicant Tracking System (ATS) is software that companies use to receive, sort, and screen job applications at scale. Rather than a recruiter manually reading every CV submitted for a role, the ATS does the first pass — parsing your document, extracting key information, comparing it against the job requirements, and assigning a ranking score.

ATS software is used by virtually every company with a structured hiring process. According to HR industry research, 99% of Fortune 500 companies use an ATS. The same data shows widespread adoption among mid-market companies — any business receiving more than 30–50 applications per role is likely using one.

The most widely deployed platforms include Workday, Greenhouse, Lever, Taleo (Oracle), iCIMS, BambooHR, and SmartRecruiters. Each has different parsing logic and matching sophistication — but they all follow the same fundamental five-step process.

Most common ATS platforms

Workday, Greenhouse, Lever, Taleo, iCIMS, BambooHR, SmartRecruiters, Ashby, Rippling

How an ATS Actually Scans Your CV

The moment you click "Apply" and upload your CV, a deterministic process begins. It happens in milliseconds, and by the time a recruiter opens their dashboard, your fate is already sealed. Here's exactly what happens at each stage.

1

Step 1: Document parsing

The ATS extracts all text from your uploaded file. For a .docx file, this is straightforward. For a PDF, it depends on whether the PDF is text-based or image-based. PDFs created from design tools (Canva, Adobe InDesign) or scanned from paper are often partially or completely unreadable. The parser strips away all formatting — fonts, colours, layout — and is left with raw text.
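To make the parsing step concrete: a .docx file is simply a ZIP archive whose body text lives in word/document.xml, so extraction amounts to stripping markup. The following is a minimal sketch (regex-based tag stripping is a simplification of what production parsers do, and the in-memory document is a stand-in for a real file):

```python
import io
import re
import zipfile

def extract_docx_text(data: bytes) -> str:
    """Pull raw text out of a .docx file: it is a ZIP archive whose
    main body lives in word/document.xml. Stripping the XML tags
    discards fonts, colours, and layout, exactly as an ATS parser's
    first pass does."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        xml = zf.read("word/document.xml").decode("utf-8")
    # Paragraph-end tags become newlines; every other tag is dropped.
    xml = re.sub(r"</w:p>", "\n", xml)
    return re.sub(r"<[^>]+>", "", xml).strip()

# Build a minimal stand-in .docx in memory to demonstrate.
doc_xml = ("<w:document><w:body>"
           "<w:p><w:r><w:t>Senior Data Analyst</w:t></w:r></w:p>"
           "<w:p><w:r><w:t>SQL, Python, Tableau</w:t></w:r></w:p>"
           "</w:body></w:document>")
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("word/document.xml", doc_xml)

print(extract_docx_text(buf.getvalue()))
```

Everything visual is gone by the end of this step; only the character stream survives into section identification.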

2

Step 2: Section identification

The system attempts to identify which part of your CV belongs to which section: Work Experience, Education, Skills, Summary. It does this by looking for standard section header keywords. If your headers are non-standard ('My Story', 'Things I've Built'), the parser may misclassify entire blocks — putting your job titles in the education bucket, or your skills in the summary field.
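The header-keyword approach can be sketched in a few lines. This toy classifier (the header list and bucket names are illustrative, not any vendor's actual taxonomy) shows how a creative header silently misfiles everything beneath it:

```python
STANDARD_HEADERS = {
    "work experience": "experience", "experience": "experience",
    "employment history": "experience",
    "education": "education",
    "skills": "skills", "technical skills": "skills",
    "summary": "summary", "profile": "summary",
    "certifications": "certifications",
}

def classify_sections(lines):
    """Bucket each CV line under the most recent recognised header.
    A non-standard header like 'Things I Know' is not recognised,
    so everything below it stays in the previous bucket."""
    sections, current = {}, "unclassified"
    for line in lines:
        key = line.strip().lower()
        if key in STANDARD_HEADERS:
            current = STANDARD_HEADERS[key]
        else:
            sections.setdefault(current, []).append(line)
    return sections

cv = ["Jane Doe",
      "Work Experience",
      "Senior Data Analyst, FinCo Ltd",
      "Things I Know",   # non-standard header: not recognised
      "SQL, Python, Tableau"]
print(classify_sections(cv))
```

Note the failure mode: the skills line lands in the experience bucket, and no skills section exists at all in the parsed result.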

3

Step 3: Entity extraction

From each identified section, the ATS pulls out specific entities: job titles, company names, employment dates, degree names, institutions, technologies, and skills. It calculates your total years of experience, identifies your most recent role, and assembles a structured data record from your document. This is the record recruiters actually search against — not your original CV.
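Skill extraction is essentially a lookup against a known taxonomy. A simplified sketch, assuming a hypothetical KNOWN_SKILLS set (real platforms maintain far larger, versioned taxonomies):

```python
import re

KNOWN_SKILLS = {"sql", "python", "tableau", "agile", "salesforce crm"}

def extract_skills(text: str) -> list:
    """Scan raw CV text for entries in a known-skills taxonomy.
    Only explicit, exact mentions count: a skill rendered in a logo
    or bar chart never appears in the text and is never extracted."""
    lowered = text.lower()
    return [skill for skill in sorted(KNOWN_SKILLS)
            if re.search(r"\b" + re.escape(skill) + r"\b", lowered)]

text = "Built dashboards in Tableau; automated reporting with Python and SQL."
print(extract_skills(text))  # ['python', 'sql', 'tableau']
```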

4

Step 4: Keyword matching against the job description

The extracted data is compared against the job requirements. The ATS has already parsed the job description with the same logic — identifying required skills, preferred experience, qualifications, and weighted keywords. It scores how many required elements appear in your extracted record. Legacy systems use exact string matching; modern platforms use semantic similarity — but exact matches score higher on all of them.

5

Step 5: Scoring and ranking

Each candidate receives a match score — typically expressed as a percentage. Candidates above the threshold (often 70–80%, set per role) are moved into the recruiter's review queue. Those below are automatically archived. Unless the recruiter explicitly searches for archived candidates — which most do not — your application disappears from active consideration.
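Steps 4 and 5 together reduce to a set intersection plus a cut-off. A toy version, with illustrative keywords and a hypothetical 75% threshold:

```python
def match_score(required: set, extracted: set) -> float:
    """Fraction of required keywords present in the candidate's
    extracted record, expressed as a percentage."""
    if not required:
        return 100.0
    return 100.0 * len(required & extracted) / len(required)

def route(score: float, threshold: float = 75.0) -> str:
    """Per-role cut-off: pass into the review queue or auto-archive."""
    return "review queue" if score >= threshold else "archived"

required = {"sql", "python", "tableau", "stakeholder management"}
extracted = {"sql", "python", "tableau", "agile"}

score = match_score(required, extracted)
print(f"{score:.0f}% -> {route(score)}")  # 75% -> review queue
```

One missing required phrase out of four costs 25 points here, which is why a single synonym substitution can move a candidate from the queue to the archive.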

What ATS Systems Look For

Understanding what the system is measuring lets you give it exactly what it needs. These are the six primary signals that determine your extracted record and score.

Keywords from the job description

ATS systems compare your CV against the exact language in the job posting. If the role says 'stakeholder management' and your CV says 'client relationship', you may score zero — even if the skills are identical.

Job title matching

Your most recent job title is weighted heavily. If a role requires a 'Senior Product Manager' and your title was 'Lead PM', the system may not connect the two without the right keywords elsewhere.

Years of experience

Many ATS systems automatically parse date ranges and calculate total experience. A role requiring '5+ years' may auto-reject a candidate with 4 years 11 months if the dates are entered ambiguously.

Education and qualifications

Degree level, field of study, and specific certifications are extracted and matched. If a role requires 'BSc Computer Science' and your CV lists 'Bachelor of Science in Computing', some systems will miss it.

Hard skills and tools

Specific technologies, frameworks, methodologies, and tools are extracted as discrete entities. 'Python', 'Agile', 'Salesforce CRM', 'Google Analytics' — each needs to appear explicitly, in full.

Clean, parseable formatting

ATS parsers struggle with tables, columns, text boxes, headers, and footers. Content inside these elements is often lost entirely — taking key skills and titles with it.

The synonym problem is bigger than you think

A recruiter reads "managed stakeholder relationships" and understands it means the same as "built client partnerships". A basic ATS keyword scanner does not. If the job description says "stakeholder management" and your CV says "client engagement", you could score zero on that criterion — despite being the most qualified candidate in the pool. Modern ATS platforms with semantic matching handle this better, but exact keyword matching remains the norm in most mid-market deployments.
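A legacy-style exact check makes the synonym problem concrete. A one-line sketch, case-folded only:

```python
def keyword_hit(criterion: str, cv_text: str) -> bool:
    """Legacy-style exact substring check: synonyms score nothing."""
    return criterion.lower() in cv_text.lower()

cv_text = "Led client engagement across three product lines."
print(keyword_hit("stakeholder management", cv_text))  # False: scores zero
print(keyword_hit("client engagement", cv_text))       # True
```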

Why Most CVs Fail ATS Screening

The most common reasons for ATS rejection are entirely preventable. None of them are about your actual qualifications.

Sending the same CV to every job

A generic CV rarely matches any job description well enough. ATS scoring is job-specific — the same CV that scores 90% for one role may score 40% for a nearly identical one with different phrasing.

Using synonyms instead of exact terms

ATS systems are keyword-literal. 'Led cross-functional teams' and 'Managed interdepartmental collaboration' describe the same thing but match completely different search terms.

Two-column layouts

Most ATS parsers read left-to-right, top-to-bottom. A two-column CV is often read as one jumbled paragraph, mixing job titles with education dates and skill names with company descriptions.

Hiding keywords in images or graphics

ATS systems read text, not images. Logos, icons, skill bar charts, and infographic-style elements are completely invisible to the scanner — taking whatever text they contain down with them.

Using non-standard section headers

'My Journey', 'What I've Built', and 'Things I Know' may look creative. ATS parsers look for 'Work Experience', 'Education', and 'Skills'. Non-standard headers cause entire sections to be misclassified or ignored.

Submitting as PDF (sometimes)

Not all ATS systems handle PDFs well. Unless the job posting explicitly accepts PDFs, submitting a .docx file is safer. PDFs created from scans or design tools are especially problematic.

Each of these mistakes is covered in full detail — including what to use instead — in The CV Mistakes That Get You Rejected Before Anyone Reads Your Application.

What Your Parsed Data Record Actually Contains

When the ATS finishes processing your CV, it doesn't store your document — it stores a structured data record built from the text it extracted. This is what recruiters search against and what the scoring algorithm evaluates. Understanding its fields explains why small formatting and phrasing decisions have outsized consequences.

Below is an example of what that record looks like for a hypothetical candidate — and the specific way each field can go wrong.

Job title: Senior Data Analyst

Extracted from the header of your most recent role. The highest-weighted field in most matching algorithms — a title mismatch alone can tank your score.

Total years of experience: 8 years

Calculated by summing all employment date ranges in your Work Experience section. Gaps, overlapping roles, and ambiguous formats (e.g. 'Spring 2021') cause miscalculations.

Skills list: SQL, Python, Tableau, Agile

Every discrete technology, methodology, and tool name found anywhere in the document — including inside bullet points. Only text content is scanned; images are ignored entirely.

Most recent employer: FinCo Ltd

Company name from your latest role. Used for industry context scoring in some platforms. Abbreviations may not be connected to the full company name.

Education: BSc Computer Science, University of Leeds

Degree, field of study, and institution are each parsed as separate data points. 'BSc Computing' and 'BSc Computer Science' are not always treated as equivalent.

Certifications: Not found

If your certifications use a non-standard header, this field returns empty — even if the credentials exist in the document. The parser expected 'Certifications' and found something else.
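Assembled as a machine-readable record, the fields above might look like the following (the field names are hypothetical; every platform defines its own schema):

```python
import json

# A hypothetical parsed record. Field names are illustrative,
# not any specific vendor's schema.
record = {
    "job_title": "Senior Data Analyst",
    "total_years_experience": 8,
    "skills": ["SQL", "Python", "Tableau", "Agile"],
    "most_recent_employer": "FinCo Ltd",
    "education": {
        "degree": "BSc",
        "field": "Computer Science",
        "institution": "University of Leeds",
    },
    "certifications": [],  # header not recognised -> field left empty
}
print(json.dumps(record, indent=2))
```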

The implication: the recruiter never sees your carefully formatted original CV during the initial screening. They see a parsed data card. Every field that is missing, miscalculated, or misclassified is a gap in that card — and a gap in your score.

Keyword-Based vs. Semantic ATS: What You're Actually Dealing With

Not all ATS platforms work the same way. The system behind a job you apply to through a corporate careers page may be fundamentally different from one used by a tech startup — and the difference directly affects how strict the keyword matching is.

There are three broad generations of ATS technology currently in active deployment. The majority of employers — particularly mid-market and large enterprise — still use legacy or hybrid systems. Semantic ATS is growing but not dominant. Treating every application as if it requires exact keyword matching is still the most reliable strategy.

Legacy

Keyword-based matching

Taleo (Oracle), iCIMS, older Workday configs

Compares exact character strings. 'Stakeholder management' and 'stakeholder engagement' are treated as entirely different keywords with zero overlap. Still widely deployed in large enterprises, government, healthcare, and financial services — which collectively represent a significant share of total job postings. This is why exact phrasing from the job description still matters even on 'modern' platforms.

Copy exact phrases from the job description into your CV. Even small wording differences — 'managed' vs 'management' — can reduce your match score.
Hybrid

Structured taxonomy + keyword

Workday, BambooHR, SAP SuccessFactors

Combines keyword matching with pre-built skill taxonomies (job families, standard skill categories). Better at connecting 'Project Manager' with 'Programme Manager' within the same taxonomy node, but still primarily keyword-driven at the ranking level. Most common in mid-market and enterprise environments across all sectors.

Use standard industry job titles and recognised skill names. Avoid internal jargon or creative role titles — the taxonomy expects conventional terminology.
Modern

Semantic / NLP-capable

Greenhouse, Lever, Ashby, Rippling

Uses natural language processing and vector similarity to understand meaning rather than exact strings. A CV mentioning 'drove product adoption' may match a JD that says 'growth marketing' — depending on context. Common in tech companies and high-growth startups. Important caveat: exact keyword matches still score higher than semantic near-matches. 'Close enough' is not equivalent to exact.

Even on semantic platforms, exact keyword matches outperform near-matches. Mirror the job description language where possible — don't rely solely on synonyms.
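To illustrate why near-matches score lower even on semantic platforms, here is bag-of-words cosine similarity, a crude stand-in for the learned embeddings real systems use:

```python
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words counts. A toy stand-in
    for the learned embeddings a real semantic ATS would use."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

jd = "stakeholder management"
print(round(cosine(jd, "stakeholder management"), 2))  # 1.0: exact match
print(round(cosine(jd, "stakeholder engagement"), 2))  # 0.5: near-match scores lower
```

The near-match still registers, which is the improvement over legacy string matching, but it never outranks the exact phrase.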

What Actually Happens to Rejected Applications

Understanding what happens after a low score helps explain why the submission decision matters as much as the score itself — and why re-applying without changing your CV rarely produces a different result.

Rejected applications go into an archive, not a bin

Most ATS platforms archive low-scoring applications rather than permanently deleting them. Recruiters can search rejected candidates — but in practice, rarely do. Unless a recruiter is specifically filtering for archived candidates, your application will not surface again, even if a similar role opens the following week.

The score threshold is often set per role, not per company

Hiring managers can configure cut-off scores individually for each job posting. A competitive role receiving 400 applications might be set to auto-archive anything below 75%. A niche role with 20 applicants might have no hard cut-off at all. You are not competing against one threshold — you are competing against a different threshold for every role you apply to.

Most recruiters start from the top, not from a full review

Even when applications make it into the review queue, recruiters typically work downward from the highest-scoring CV. In high-volume roles, the top 10–20 candidates receive the most attention. If you score 71% and the role generates 300 applicants, you may technically pass the threshold but still not be reviewed if the recruiter finds enough strong candidates above you.

Re-applying to the same role rarely helps

If your first submission is rejected, re-uploading the same CV will typically produce the same score. The ATS parses the document content, not your application history. To improve your result, the CV content itself must change — the section headers, keyword density, and job title alignment all need to be addressed before resubmitting.

The Bottom Line

ATS systems are not trying to reject good candidates. They exist because hiring teams receive hundreds of applications for every role, and a human cannot reasonably review all of them. The system is a filter — and like any filter, once you understand exactly how it works, you can make sure the right information passes through it cleanly.

The mechanics are consistent across platforms: your document is parsed into a structured record, that record is scored against the job description, and the score determines whether a recruiter ever sees your name. The differences between platforms — legacy keyword-matching vs. semantic NLP — affect the tolerance for synonym variation, not the fundamental process.

What to do with this knowledge is covered in the next two guides: understanding what your specific score means and how to diagnose what's dragging it down, then the step-by-step process for tailoring your CV to close those gaps before you apply.

Put this into practice

Stop guessing what the ATS sees.
Let JOBVIAN show you.

JOBVIAN scores your CV against real job listings, surfaces exactly which keywords are missing from your parsed record, and rewrites your CV to close the gap — automatically, in under a minute.