Writing Outreach That Tech Candidates Reply To

Message structure, relevance, credibility, and the mistakes that get ignored

Introduction

I wrote this book because I have spent enough time on both sides of technical hiring to know how often good roles get buried under bad outreach. The problem is usually not effort. Recruiters are working. The problem is that the message does not help a technical candidate decide, quickly, whether replying is worth it.

That gap matters more now than it used to. Candidates see more polished, AI-assisted outreach, more vague promises, and more hiring noise in general. So smooth wording is no longer impressive. Clear judgment is.

In my experience, good outreach for engineers is not about sounding clever or enthusiastic. It is about making four things obvious fast: what the role is, why you chose this person, why the opportunity is credible, and how easy it is to take the next step.

This book is a practical guide to doing that. I will give you a simple structure, show you where most outreach goes wrong, and explain how to build relevance without turning every message into a handcrafted art project. If you are reading other books in this series, you will recognize the theme: less theater, more signal.

Why most tech outreach gets ignored

Most tech outreach gets ignored for a simple reason: the candidate is not reading it the way the recruiter wrote it. You may think, I found a strong profile and wrote a friendly note. The candidate is asking three faster questions: is this relevant, is this credible, and is replying worth my time?

That is a useful reset. Engineering outreach is hard by default. Ashby’s candidate sourcing report found an average response rate of 19.6% across a large set of outreach sequences, with engineering among the less responsive groups. So silence is not mysterious. It usually means the message did not clear the bar quickly enough.

The two biggest failure modes are vagueness and obvious mass-mailing.

A vague message tells the candidate almost nothing. “Exciting backend role at a fast-growing company” is filler. Backend engineering can mean product features, distributed systems, data infrastructure, internal tools, reliability work, or a monolith held together by optimism. If the candidate cannot tell what kind of work this is, ignoring it is rational.

Mass-mailing fails for a different reason. Candidates can tell when the stack does not match, when the praise is generic, or when the “personalization” is just a line copied from a profile. That matters even more now because LinkedIn’s Future of Recruiting 2025 report says 37% of recruiting organizations are actively integrating or experimenting with generative AI in hiring, up from 27% a year earlier. Candidates are seeing more polished language, not necessarily more thoughtful outreach.

Technical audiences are especially sensitive to that. In the Stack Overflow 2025 Developer Survey AI section, 46% of developers said they distrust the accuracy of AI tools, while 33% said they trust them. Experienced developers were the most cautious. So if your message sounds fluent but empty, it can trigger the same reaction as a bad AI answer: tidy on the surface, doubtful underneath.

What outreach is actually trying to do
Your first message does not need to close. It needs to make a good-fit person think: this might actually be for me.

There is another issue underneath all this: friction. Many recruiter messages ask for too much too early. A call this week, a resume, salary expectations, location flexibility, and perhaps a small blood sample. If the candidate has not decided the role is relevant, every extra request raises the cost of replying. Employ’s 2025 Job Seeker Nation Report found that 71% of candidates expect the application process to take less than 30 minutes, and 35% abandon it if it takes too long. Different stage, same rule: friction kills response.

So I think about outreach this way: the candidate is not deciding whether your job is good. They are deciding whether engaging is efficient. Your job is to make that decision easy.

The anatomy of a message that earns a reply

What works is simple. Every line in an outreach message should do one of three things: increase relevance, increase credibility, or reduce effort. If it does none of those, cut it.

Here is the structure I come back to:

A reply-worthy outreach template
Subject line

Clear, specific, and role-linked

Opening relevance cue

Show quickly why this may matter

Why you specifically

Name the evidence behind the outreach

Role snapshot

Give enough detail for a first judgment

Credibility signals

Show the role is real and worth time

Low-friction CTA

Ask for a small next step, not commitment
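For teams that draft at volume, the six-part structure above can even be enforced mechanically. The sketch below is a minimal illustration of the template as a checklist, not a real tool; all field names and sample values are my own assumptions.

```python
# Minimal sketch: the six-part outreach structure as a template assembler.
# Field names and sample values are illustrative assumptions, not a standard.

OUTREACH_PARTS = [
    "subject",          # clear, specific, role-linked
    "relevance_cue",    # why this may matter to them
    "why_you",          # the evidence behind the outreach
    "role_snapshot",    # enough detail for a first judgment
    "credibility",      # show the role is real and worth time
    "cta",              # small, low-friction next step
]

def assemble_outreach(fields: dict) -> str:
    """Build a draft message, failing loudly if any structural part is missing."""
    missing = [part for part in OUTREACH_PARTS if not fields.get(part)]
    if missing:
        raise ValueError(f"Draft is missing required parts: {missing}")
    body = "\n\n".join(fields[part] for part in OUTREACH_PARTS[1:])
    return f"Subject: {fields['subject']}\n\n{body}"

draft = assemble_outreach({
    "subject": "Staff backend role, developer tooling",
    "relevance_cue": "Your observability work overlaps with what this team is solving.",
    "why_you": "You built and operated an internal platform used by product teams.",
    "role_snapshot": "The team owns CI/CD tooling; current focus is release confidence.",
    "credibility": "The role is open because the platform team doubled its scope this year.",
    "cta": "Happy to send team, process, and compensation details first if helpful.",
})
print(draft.splitlines()[0])  # -> Subject: Staff backend role, developer tooling
```

The point of the sketch is the failure mode: a draft missing any of the six parts is rejected before it is sent, which is exactly the discipline the structure is meant to impose.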

Subject line

Do not get clever here. Reduce ambiguity.

A good subject line tells the candidate what kind of role this is and gives one reason to open it. Usually that means role plus domain, team, or problem area. “Staff backend role, developer tooling” works. “Quick question” does not. I was tired of that subject line before half the current recruiting software existed.

If you use a technical abbreviation for the first time, spell it out. For example, “Continuous Integration / Continuous Delivery (CI/CD)” means the automated systems teams use to test and ship code reliably.

Opening relevance cue

The first sentence should orient the candidate immediately. Why might this role matter to them?

This is not the place for flattery. It is the place for fit. “I thought your background in observability overlapped with what this team is solving” is useful. Observability means the tools and practices engineers use to understand what a live system is doing in production. “Your profile is impressive” is not useful.

Why I reached out to you

This is the center of the message. You need one or two grounded reasons you chose this person.

Reference actual work, scope, or a pattern in their experience. A relevant system they built. A domain they know. The kind of ownership they seem to handle. Keep it factual and restrained.

This is also where personalization either works or becomes theater. Ashby’s sourcing report found a meaningful reply-rate lift for campaigns using AI-generated personalization, including a 48% higher positive response rate for outreach sequences that used AI-powered personalization tokens. I would not read that as permission to automate sincerity. I read it as support for structured relevance. If AI helps you summarize a profile faster, fine. Your judgment still has to do the important part.

Role snapshot

Once the candidate understands why you chose them, give them enough substance to decide whether the role is worth a conversation.

A good role snapshot answers four questions quickly:

  • what the team owns
  • what problems they are solving
  • what kind of person fits
  • one or two concrete details about stack, constraints, or environment

For example, if the role touches CI/CD, explain it plainly. If it involves a Site Reliability Engineer (SRE), say that SREs are the engineers responsible for keeping production systems reliable and available. Do not assume the reader, or the candidate, is helped by shorthand alone.

If artificial intelligence is part of the work, be concrete. The Stack Overflow 2025 Developer Survey found that many developers are cautious about AI claims, and that a technology lacking AI features ranked low among their reasons for losing interest in it. So “we use AI” is a weak hook by itself. “The team reviews AI-assisted code and is building guardrails for production use” is a real job detail.

Credibility signals

Candidates make a trust decision before they make a career decision.

The strongest early credibility signals are simple: why the role is open, who the person would work with, what the interview process looks like, and any major fit constraints. If you can share compensation range, do. If the role is hybrid, say so. If on-call is part of the work, say so.

That matters because trust is thin. Greenhouse’s 2025 AI in Hiring Report found that only 8% of job seekers believe AI makes hiring more fair, 87% say transparency about AI use in hiring is important, and 69% have encountered fake job postings. Specifics are not a nice extra anymore. They are part of basic credibility.

Low-friction call to action

Ask for the smallest reasonable next step.

Usually that means a short intro or permission to send more detail. Not “apply here,” not “send me your resume and three times for a call.” Employ’s 2025 Job Seeker Nation Report is useful here too: candidates respond better to low-friction processes, and 51% said strong recruiter communication was one of the biggest contributors to a positive candidate experience.

A simple editing test
If I remove a sentence and the candidate loses nothing useful, I cut it.
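Part of that editing test can be approximated mechanically. The sketch below flags draft lines that contain common filler phrases; the phrase list is my own illustrative assumption, not a canonical set, and it supplements rather than replaces the sentence-by-sentence check.

```python
# Rough sketch of the editing test: flag draft lines carrying common filler.
# The filler list is an illustrative assumption, not an exhaustive set.

FILLER_PHRASES = [
    "impressive background",
    "your profile stood out",
    "exciting opportunity",
    "fast-growing",
    "cutting-edge",
    "dynamic environment",
]

def flag_filler(draft: str) -> list[str]:
    """Return the draft lines that contain a known filler phrase."""
    flagged = []
    for line in draft.splitlines():
        lowered = line.lower()
        if any(phrase in lowered for phrase in FILLER_PHRASES):
            flagged.append(line)
    return flagged

draft = (
    "Your profile stood out to me.\n"
    "The team owns service reliability and incident reduction.\n"
    "This is an exciting opportunity at a fast-growing company."
)
print(flag_filler(draft))  # flags the first and third lines
```

A flagged line is not automatically bad, but it should have to justify itself: does it increase relevance, increase credibility, or reduce effort?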

Relevance is doing most of the work

Most personalization is decorative. It proves you looked at a profile, not that you understood why this person might care about this job.

The useful question is simple: does this detail help answer “why me for this role?” If not, it probably does not belong in the message.

In practice, the strongest relevance signals usually fall into five buckets:

  • architecture experience
  • domain fit
  • stage fit
  • scope
  • current technical depth

Architecture experience means the shape of systems they have actually worked on: distributed systems, data pipelines, internal platforms, real-time products, developer tooling, or model-backed workflows. Domain fit means the problem space matters here too. Stage fit means whether they have built in the kind of company you are hiring for. Scope is what they owned. Current technical depth matters because someone who moved away from hands-on work years ago is different from someone still close to code.

A common mistake is treating every keyword as equally useful. It is not. “Used Kubernetes” is weak. Kubernetes is a platform for running and managing software containers across many machines, and plenty of engineers have touched it without owning anything substantial around it. “Built and operated an internal platform used by product teams” is stronger. One is a tool mention. The other is evidence.

The same logic applies across role families. For backend roles, the language matters less than the kind of systems work. For platform engineering, I care about leverage: did they reduce friction for other engineers, improve release confidence, or build internal tooling? Platform engineering usually means building the systems other developers use, such as deployment workflows, observability, cloud infrastructure, identity, or developer environments. For product-minded full-stack roles, “full-stack” by itself tells you very little. Full-stack usually means an engineer who works across both user-facing and server-side code. I look for end-to-end ownership and product judgment, not a long list of frameworks.

AI-related roles need even more care because titles are messy. “AI engineer,” “machine learning engineer,” “applied scientist,” and “LLM engineer” are not interchangeable. LLM means large language model, the kind of model behind tools that generate or analyze text. Personalize off the actual work instead: training models, building retrieval pipelines, shipping model-backed features, designing human-in-the-loop workflows, or running inference systems. Inference means using a trained model to generate outputs in a real product.

What to reference and what to skip
Candidate signal | Strong personalization | Weak personalization
Backend profile | Scaled event-driven services | Uses Go and AWS
Platform profile | Built internal developer platform | Worked with Kubernetes
Full-stack profile | Owned product features end to end | Knows React and Node
AI-related profile | Shipped model-backed product workflow | Mentions LLMs

There is also a practical limit to tailoring. You do not need handcrafted outreach for every person. You need a few good relevance patterns for each role family, then adapt them based on the strongest evidence. That is scalable and still sounds human.

Before you reference something, ask: does this detail help answer “why me for this role?”

Credibility without overselling

Early outreach does not need your full company story. It needs to answer a simpler question: is this real, and is it worth my time to reply?

The strongest credibility signals are concrete and boring in the best possible way.

Say what problem the team is working on. Explain why the role exists now. Anchor the role in real people and context. Be honest about the technical challenge. Include any major constraints that affect fit.

That sounds obvious, but a lot of outreach still dodges basic information in favor of adjectives. “Revolutionary.” “Game-changing.” “Cutting-edge.” If the role is genuinely interesting, it does not need costume jewelry.

A candidate should not have to schedule a call to learn the company name, compensation approach, interview shape, remote policy, or whether the role includes on-call. On-call means the engineer is sometimes responsible for responding to incidents outside normal hours. You do not need to front-load every detail, but mystery is rarely persuasive.

This matters especially in technical hiring because many candidates are already skeptical of polished language around AI and process. Greenhouse’s 2025 AI in Hiring Report found that trust in hiring has decreased for many job seekers, with 42% blaming AI directly. Stack Overflow’s 2025 Developer Survey AI section also shows that developers, especially experienced ones, are cautious about AI output. So if a message sounds smooth but oddly empty, many engineers read that as synthetic rather than professional.

Small wording changes that improve credibility
Weak | Stronger
Exciting chance to join a fast-growing team | Hiring now because the platform team is taking on reliability work after product expansion
Work on cutting-edge AI innovation | Building tools that help engineers review and ship AI-assisted code safely
Collaborate cross-functionally in a dynamic environment | Work with the product lead, designer, and two senior backend engineers on core workflow features
Let’s jump on a quick call for details | Happy to send team, process, and compensation details first if helpful
What to include in the first message

Common mistakes that quietly kill response rates

Most ignored outreach is not failing because the market is impossible. It is failing because the note makes the candidate work too hard to understand, trust, or answer.

Here are the mistakes I check first.

Too long

If the first message needs scrolling, it is probably doing too much. A candidate should not have to dig through company history, benefits, and founder philosophy to find the job.

Generic praise

“Impressive background” and “your profile stood out” are filler. Replace compliments with one specific reason you reached out.

Vague role scope

“Interesting backend opportunity” is not a role description. Name the actual work.

Decorative detail

Shared schools, hobbies, or hometowns are rarely what earn a reply. Employ’s 2025 Job Seeker Nation Report found that easy process, focused interviewers, good conversation skills, and strong questions shape candidate impressions far more than superficial similarities. In outreach terms, genuine role relevance beats decorative personalization.

Premature urgency

“Hiring ASAP” often lowers trust unless you explain why the timing matters.

High-friction CTA

Do not ask for a long screening call, an application, and updated documents before the person has even decided the role is relevant.

Hidden process

If your process includes assessments or automation, clarity is better than vagueness. This is especially true in technical hiring. HackerRank’s 2025 Developer Skills Report found that 66% of developers prefer practical coding challenges, while many say traditional assessments do not reflect real work. It also found that developers worry AI makes assessments easier to game. If your process is practical and grounded, say so.

Common mistakes and better edits
Mistake | Weak version | Stronger version
Too long | I wanted to share details about our company, team, mission, funding, and role. | We are hiring a backend engineer to improve developer tooling for our platform team.
Generic praise | Your background is very impressive. | I reached out because you have built payment systems at scale.
Vague scope | This is a great opportunity in engineering. | The role focuses on service reliability and incident reduction.
Founder chaos language | We thrive in ambiguity and wear many hats. | The team is small, priorities shift, and this role will help set process.
High-friction CTA | Please apply here and send times for a 45-minute intro. | If relevant, I can send a brief overview and answer questions first.
Edit before you send

A practical outreach workflow you can use tomorrow

Good outreach gets easier when you stop treating it like inspiration and start treating it like a light process.

Mine is simple: start with the role, choose the strongest fit signals, draft from a template, check for relevance and credibility, send, then review results by pattern.

Start with the real role, not the job description after six rounds of committee seasoning. What problem is this person coming in to solve? What kind of engineer is most likely to care? For most searches, I want domain fit, technical fit, and sometimes one context signal such as stage, scale, or team shape.

Then choose only the strongest signals you can support. If the role needs someone who has built internal developer tools, say that. If it needs a senior backend engineer who has led architecture in a regulated environment, say that. In my experience, relevance is usually subtraction.

Draft from the same basic structure each time: why I am reaching out, why this person, why this role might be worth a reply, and one easy next step.

Before you send, do a credibility check. Are the claims specific? Can the candidate verify them quickly? If AI helps you summarize profiles or draft variants faster, use it. Just do not let it invent certainty.

Follow-up matters too. Email is usually the cleanest format for a detailed note. LinkedIn can work if the message is short and grounded. Follow up once or twice if you have something useful to add, such as clearer scope, process details, or a sharper reason for fit. Do not “just circling back” your way into the bin.

Finally, track patterns, not omens. Compare reply rates by role family, seniority, channel, and message variant. Employ’s 2025 Job Seeker Nation Report found that strong recruiter communication materially improves candidate experience. So your job is not finished when someone replies. Speed and clarity after first contact count too.

A simple outreach workflow
1. Clarify the role: name the real problem and fit signals
2. Choose evidence: pick 2 to 3 reasons this person fits
3. Draft: use a short structure, not a fresh reinvention
4. Check: cut vagueness, hype, and friction
5. Send and review: track patterns by role, channel, and variant
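Tracking patterns rather than omens can be as simple as grouping reply rates by segment. A minimal sketch, assuming a flat log of sent messages; the record fields and sample data are illustrative, not a prescribed schema.

```python
# Minimal sketch of the review step: compare reply rates by pattern, not anecdote.
# Record fields and sample data are illustrative assumptions.

from collections import defaultdict

def reply_rates(messages: list[dict]) -> dict[tuple, float]:
    """Group sent messages by (role_family, channel) and compute reply rates."""
    sent = defaultdict(int)
    replied = defaultdict(int)
    for m in messages:
        key = (m["role_family"], m["channel"])
        sent[key] += 1
        replied[key] += m["replied"]  # True counts as 1, False as 0
    return {key: replied[key] / sent[key] for key in sent}

outreach_log = [
    {"role_family": "backend", "channel": "email", "replied": True},
    {"role_family": "backend", "channel": "email", "replied": False},
    {"role_family": "platform", "channel": "linkedin", "replied": False},
    {"role_family": "platform", "channel": "linkedin", "replied": True},
    {"role_family": "platform", "channel": "linkedin", "replied": True},
]

for segment, rate in sorted(reply_rates(outreach_log).items()):
    print(segment, f"{rate:.0%}")
```

With a real log you would add seniority and message variant as grouping keys, and only trust segments with enough volume to mean something.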
Daily outreach checklist

Conclusion

If I had to reduce this whole book to one rule, it would be this: write the message so the candidate can make a fast, informed maybe.

That means less polish, less praise, less mystery. More evidence, more clarity, less effort required to reply. In technical hiring, relevance does most of the work, credibility does the rest, and verbosity mostly gets in the way.

I have seen recruiters improve outreach quickly once they stop trying to sound impressive and start trying to be useful. The good news is that this usually makes the message shorter, easier to write, and easier to scale.

If you are reading through the rest of this series, you will notice the pattern. Better hiring communication is rarely about saying more. It is about making the important things easier to see.

Glossary

  • AI (Artificial Intelligence) — Software systems that can generate, analyze, or classify content and are increasingly used in engineering tools and hiring workflows.
  • AI-assisted code — Code written partly with help from AI tools rather than entirely by a human developer.
  • Architecture — The high-level structure of a software system, including how parts of it are designed and work together.
  • Backend engineer / backend engineering — Engineers and work focused on the server-side parts of a product, such as data processing, business logic, and system performance.
  • CI/CD (Continuous Integration / Continuous Delivery) — Automated processes that help developers test and ship code reliably and frequently.
  • Data pipeline — A system that moves and transforms data from one place to another for storage, analysis, or product use.
  • Distributed systems — Software systems that run across multiple machines or services and must coordinate reliably.
  • Event-driven systems — Systems where actions are triggered by events, such as a user action or a message arriving from another service.
  • Full-stack engineer — An engineer who works on both front-end and back-end parts of a product.
  • Human-in-the-loop — A workflow where people review, guide, or correct outputs from a system rather than leaving decisions fully automated.
  • Inference — The process of using a trained machine learning model to generate predictions or outputs in a real product.
  • Internal developer platform — Internal tools and systems that help engineers build, deploy, and maintain software more easily.
  • Kubernetes — A platform used to run and manage software containers across many machines.
  • LLM (Large Language Model) — A type of AI model trained on large amounts of text and used for tasks like writing, summarizing, or answering questions.
  • Machine learning — A way of building software that learns patterns from data rather than relying only on fixed rules.
  • Model-backed feature — A product feature that depends on a machine learning or AI model to work.
  • Observability — Tools and practices that help engineers understand what a live system is doing and diagnose problems.
  • On-call — A work arrangement where an engineer is responsible for responding to incidents outside normal working hours.
  • Platform engineering — Work focused on building internal systems and infrastructure that make other engineers more productive.
  • Product-minded — Describes an engineer who makes decisions with user needs, business goals, and product trade-offs in mind.
  • Retrieval pipeline — A system that finds relevant information from a dataset or knowledge base before passing it into an AI or software workflow.
  • Role family — A broad category of jobs with similar patterns, such as backend, platform, or full-stack roles.
  • SRE (Site Reliability Engineer) — An engineer focused on keeping production systems reliable, available, and performant.
  • Stack — The combination of programming languages, tools, frameworks, and infrastructure used to build a product.
  • Technical depth — How close someone is to hands-on technical work and how deeply they understand the systems they work with.
  • Technical fit — How well a candidate’s actual engineering experience matches the technical demands of a role.