
I Let AI Apply to 100 Jobs for Me — Here's What Happened

I ran a real experiment: 100 job applications handled by AI tools over two weeks. Here's what the response rates looked like, what surprised me, and what I'd do differently.

By Amine Barchid
Tags: job search, auto apply, AI job application, experiment, job hunting

I Was Burning Out. So I Let AI Take Over.

Weeks of waking up at 7 AM to fill out job applications. Same fields, same questions, the same reformatting of your resume into boxes for the fifth time that morning. By week three you stop reading the job descriptions carefully. By week four, you're copying and pasting cover letters and changing the company name. By week five, you're not sure what you even applied to anymore.

That was me eighteen months ago, somewhere around application number sixty.

So I did what any exhausted, mildly obsessed software developer would do: I turned it into an experiment.

The goal was simple. Have AI tools handle 100 job applications across two weeks. Track the results honestly. No cherry-picking, no inflating the numbers, no pretending a tool was better than it was.

Here's what actually happened.


The Setup

Before diving into results, the experiment parameters matter.

Profile: Software engineer with four years of experience. Targeting mid-level backend and full-stack roles in Germany and remote positions across Europe. Salary range: €65K-€90K.

Job types: Greenhouse and Ashby ATS postings (easier to automate, more standardized), plus some LinkedIn Easy Apply roles.

Time frame: 14 days.

Target: 100 applications sent, not just "submitted to pipeline." An application counted when the confirmation email arrived or the ATS dashboard showed "applied."

Tools tested: Three different tools, which I'll get into below. I didn't stick to one because this was about understanding the category, not picking a winner from day one.

What I tracked:

  • Application volume per day
  • Confirmation receipt rate (did the application actually go through?)
  • Email responses (any kind)
  • Recruiter outreach
  • Interview invitations
  • Jobs I'd never apply to manually (bad matches)

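The tracked fields map naturally onto a small record per application. Here's a minimal sketch in Python; the field names and the `Application` class are my own illustration, not any tool's export format:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical record for one application -- field names are illustrative,
# not taken from any specific tool.
@dataclass
class Application:
    company: str
    role: str
    ats: str                        # e.g. "greenhouse", "ashby", "linkedin"
    applied_on: date
    confirmed: bool = False         # did a confirmation email / dashboard entry arrive?
    response: Optional[str] = None  # "rejection", "positive", "recruiter", "interview"
    bad_match: bool = False         # flagged as a role I'd never apply to manually

def response_rate(apps: list[Application]) -> float:
    """Share of applications with any positive signal."""
    positive = sum(1 for a in apps if a.response in ("positive", "recruiter", "interview"))
    return positive / len(apps) if apps else 0.0
```

A structure like this is what makes the week-by-week numbers below possible to report honestly, rather than reconstructed from memory.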
Days 1-3: Getting Started Was the Hardest Part

This was the part nobody warns you about.

Setting up any AI job application tool takes real time upfront. You're not pressing a button and watching 100 applications send in ten minutes. The first day involved:

  • Uploading my resume (simple enough)
  • Filling out a detailed profile: skills, experience levels, preferred industries, excluded companies, location preferences, salary range, work type (remote/hybrid/office)
  • Connecting accounts (LinkedIn, email for notifications)
  • Setting quality filters so the tool wasn't applying to anything with "junior" in the title or $35K salary caps

The tools that let you configure filters properly saved me from a lot of garbage matches. The ones with minimal configuration meant I had to manually reject applications later or risk applying to roles that had nothing to do with my profile.

By day three, I had the systems calibrated. Applications started going out.

One thing I noticed immediately: the tools that ran in the browser while I was active felt much more controlled than server-side automation. I could see exactly what was being submitted. With server-side tools, you're trusting the system and checking back later, which works until it doesn't.


Week One: The Numbers

By the end of week one (days 1-7), here's where things stood:

  • Applications submitted: 54
  • ATS confirmation emails received: 49
  • Applications with no confirmation: 5
  • "Bad match" applications I had to manually reject: 7
  • Automated rejection emails received: 11
  • Positive email responses: 3
  • Recruiter LinkedIn messages: 2
  • Interview invitations: 1

The confirmation gap surprised me. Five applications went into what felt like a void. No confirmation, no rejection, nothing. Whether the submission actually worked or the ATS dropped it, I'll never know.

The seven bad matches were frustrating. Despite my filters, three applications went to companies I had explicitly excluded (one was a gambling platform I definitely didn't want on my resume). Four went to roles with "junior" in the job title. The tools that let me set a pre-submission review step were far more valuable than I initially gave them credit for.

The interview invitation came from a role I'd consider a stretch. Mid-level backend engineer at a Series B company in Munich. Turned out their ATS pre-screening was light and the recruiter moved quickly. The volume played a role in that outcome.

The response rate that first week: about 7.4% (4 positive responses out of 54 applications). That's not far off the benchmark research suggests for volume-based applying in a normal market.


Week Two: Quality vs. Quantity Shows Up

By week two, something interesting happened. The volume was there but the response rate changed depending on which tool I used for which application type.

The AI tools that customized cover letters and tailored responses to application questions got meaningfully better response rates than the ones that sent a static cover letter to every job. Not dramatically better, but the difference was clear enough to notice.

Here's week two's breakdown:

  • Applications submitted: 46
  • ATS confirmation emails received: 44
  • Automated rejection emails: 17
  • Positive email responses: 4
  • Recruiter LinkedIn messages: 3
  • Interview invitations: 2
  • Applications I later flagged as poor matches: 4

Two more interview invitations. One recruiter conversation that led nowhere but was a useful signal about what companies in that space were actually looking for. Total for the two weeks: three interview invitations from 100 applications, which is a 3% interview rate. Response rate (including recruiters who reached out) was about 14 out of 100, or 14%.


The Surprising Findings

A few things I didn't expect going in:

The quality filter mattered more than the automation speed.

The tools that applied thoughtlessly fast created more cleanup work than they saved. Applying to 100 jobs in two hours sounds impressive until you're manually reviewing 20 applications to roles you'd never want and contacting companies to withdraw. Quality filters, exclusion lists, and minimum-match thresholds matter enormously. Speed without targeting is noise.
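To make the "quality filter" idea concrete, here's a minimal pre-submission filter sketch. The thresholds, field names, and exclusion lists are assumptions for illustration, not any tool's actual configuration:

```python
# Illustrative pre-submission filter -- thresholds and field names are
# my own assumptions, not taken from any particular tool.
EXCLUDED_COMPANIES = {"gambling-platform.example"}
EXCLUDED_TITLE_WORDS = {"junior", "intern"}
MIN_SALARY = 65_000  # lower bound of the target range

def should_apply(job: dict) -> bool:
    """Return True only if the posting clears every exclusion rule."""
    title = job.get("title", "").lower()
    if job.get("company", "").lower() in EXCLUDED_COMPANIES:
        return False
    if any(word in title for word in EXCLUDED_TITLE_WORDS):
        return False
    # Reject roles whose salary ceiling falls below the target floor.
    salary_max = job.get("salary_max")
    if salary_max is not None and salary_max < MIN_SALARY:
        return False
    return True
```

Even a dozen lines like this, run before submission rather than after, would have prevented the gambling-platform and "junior"-title applications described above.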

Cover letter customization moved the needle.

I expected it not to. Hiring managers claim to read cover letters, but at volume you assume most don't. It turns out that for competitive roles, a generic cover letter still signals that the applicant isn't engaged. The AI-generated tailored letters weren't perfect, but they were good enough not to get filtered out on the cover letter alone. That's the bar. They cleared it.

The ATS confirmation gap is a real problem.

Five applications in week one, two in week two: seven total where I don't know what happened. That's 7% of my applications existing in an unknown state. For manual applications, you'd know the moment something went wrong. With automation, there's a layer of abstraction that makes debugging harder. This is why tools with transparency into exactly what was submitted and what response came back are worth the extra setup time.

Volume without targeting gets you rejected faster, not hired faster.

The seventeen rejections in week two came almost entirely from the lower-quality job matches. Roles where the JD was off-profile, salary bands were wrong, or experience requirements didn't match. The more precisely I filtered going in, the fewer fast rejections I received. This is the core tension in job application automation: automation works better the more precisely you define what you want.

The interview rate held up better than I expected.

Three interviews from 100 applications in two weeks. Manually, in those same two weeks, I probably would have submitted 20-25 applications if I was being disciplined about it. My historical interview rate from that manual volume was maybe 1-2 interviews. The automation gave me roughly 4x the reach and more interviews in absolute terms, even if the per-application hit rate was slightly lower.


What the Tools Did Well

Without turning this into a full comparison post (I've already covered those in detail for LazyApply, LoopCV, and JobCopilot), a few things stood out as universally useful:

  • Profile-based auto-fill is legitimately good. The part where you retype your address, phone number, LinkedIn URL, and work authorization status for the hundredth time goes away entirely.
  • ATS standardization means most Greenhouse and Ashby applications behave consistently, which makes automation more reliable on those platforms than general-purpose form filling.
  • Saved search preferences meant I could close the browser and come back to a queue of filtered, pre-screened opportunities rather than starting from scratch each morning.

The thing that varied most across tools was the quality of job matching. Some tools find a lot of jobs and let you apply to all of them. Others are more selective about surfacing roles that actually match your criteria. The selective approach produces better results even if the raw volume is lower.


What I'd Do Differently

Start with tighter filters. I spent too much of week one recalibrating because the default settings cast too wide a net. Set minimum match criteria from day one, even if that means slower initial volume.

Use the pre-submission review window. Every application that went to a bad match in this experiment went through a tool that didn't surface a review step. Where that option existed and I used it, I caught the mismatches before they went out. The extra thirty seconds per application pays off.

Don't automate the cover letter entirely. The best results came from AI-drafted letters that I spot-checked for five to ten jobs at the start, caught the patterns I disliked, adjusted the prompts, and then let run. Full automation with zero review produced generic output. Quick human-in-the-loop calibration made it much better.

Track everything. I used a simple spreadsheet: application date, company, role, ATS used, response type, date. Without that, two weeks of 100 applications becomes an unmanageable blur. Some tools have built-in tracking dashboards that do this automatically, which is worth prioritizing when choosing a tool.
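For the spreadsheet-averse, the same tracking can be a one-function append to a CSV file. This is a sketch under my own assumptions (file name and column names are invented, matching the fields I tracked):

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical log file and columns, mirroring the spreadsheet fields above.
LOG = Path("applications.csv")
FIELDS = ["date", "company", "role", "ats", "response_type", "response_date"]

def log_application(company: str, role: str, ats: str) -> None:
    """Append one row per application; write the header when creating the file."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "company": company,
            "role": role,
            "ats": ats,
            "response_type": "",   # filled in later, when a response arrives
            "response_date": "",
        })
```

Call it once per submission and the "unmanageable blur" problem goes away: every application has a row, and the no-confirmation cases show up as rows with nothing in the response columns.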

Don't stop networking. This experiment was purely about application automation. In parallel, the recruiter outreach I did manually (four LinkedIn messages to connections at target companies) produced one warm conversation that turned into an interview on its own. Automation handles volume. Human outreach handles warmth. Both matter.


Should You Try This?

If you're in an active job search and spending more than two hours a day on applications, the honest answer is yes. The upfront setup cost is real but finite. The ongoing time savings compound across weeks.

The caveat is that automation works best when you're clear on what you want. If you're still figuring out your target role or industry, automation will scatter your applications in ways that can actually hurt your positioning. Being tired of the process is real but it's a different problem from not knowing where to focus.

If you have a clear target and are grinding through applications manually, automation reduces the mechanical overhead enough to be worth it.

The tools vary a lot in quality. The ones worth using share a few traits: good job matching with real filters, transparency about what gets submitted, cover letter customization, and application tracking. The ones to avoid are the ones that prioritize speed over match quality and give you no visibility into what they're doing on your behalf.

The question of whether auto-applying gets you blacklisted comes up constantly. From my experience: no, not when the tools are operating within normal rate limits and submitting applications through standard ATS flows. The risk goes up when tools operate at extreme volume or use workarounds that ATS systems flag. Staying within sane daily limits (20-30 applications per day, not 300) mitigates that risk.
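A sane daily cap is trivial to enforce in code. Here's a minimal sketch; the 25/day figure mirrors the 20-30 guidance above and is my own number, not a documented ATS limit:

```python
import datetime
from collections import defaultdict

# Illustrative cap -- within the 20-30/day range suggested above,
# not an official limit from any ATS.
DAILY_CAP = 25

class DailyLimiter:
    """Allow at most `cap` submissions per calendar day."""

    def __init__(self, cap: int = DAILY_CAP):
        self.cap = cap
        self.sent = defaultdict(int)  # date -> applications sent that day

    def allow(self) -> bool:
        today = datetime.date.today()
        if self.sent[today] >= self.cap:
            return False  # over the cap: queue it for tomorrow instead
        self.sent[today] += 1
        return True
```

Any automation pipeline can check `allow()` before each submission; anything over the cap waits for the next day rather than hammering the ATS.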


Tools Used in This Experiment

For this experiment I used three tools:

  1. ApplyGhost — This is my own tool, so take that with appropriate skepticism. I used it for Ashby and Greenhouse applications. The matching quality and cover letter customization is what I built it around, so I'm biased but also intimately aware of what it does and doesn't do well. There's a free tier (10 applications, no credit card) if you want to test it yourself before spending anything.

  2. A browser extension tool (not naming it here, but it's in my comparison of the best AI job application tools) — useful for LinkedIn Easy Apply, faster setup, less cover letter customization.

  3. A server-side automation service — produced the most volume but also the most bad matches and the most confusing tracking. Higher noise-to-signal ratio.

My overall ranking for this use case: matching quality and transparency beat raw volume every time.


The Honest Takeaway

100 applications in two weeks. Three interview invitations. A 14% positive response rate. Zero blacklistings. A lot of time saved on form-filling.

Automation doesn't replace being a strong candidate. It removes the mechanical bottleneck so your energy goes to the parts that actually differentiate you: interview preparation, targeted outreach, building a coherent application narrative.

If you're grinding applications manually right now, the experiment is worth running for yourself. The setup takes a few hours. The results start showing up in the first week.


FAQ

What's a realistic response rate from AI job applications?

Based on this experiment: around 14% across all response types (recruiters, positive emails, interview invitations). The interview rate specifically was 3%. Manual applications typically produce similar per-application rates but lower total volume, which means fewer absolute opportunities in the same time period.

Will applying to 100 jobs at once look desperate to employers?

Employers generally don't know how many places you're applying. Each application is evaluated on its own merits. The risk isn't volume, it's low-quality submissions that signal you didn't engage with the role. Good filters and customized materials mitigate that.

How long does setup actually take?

Expect 2-4 hours for initial profile setup across tools. The more detailed you are about your preferences upfront, the less cleanup you'll do later. It's front-loaded work that pays off within the first week.

What's the best tool for automated job applications?

It depends on what you prioritize. I compared the top options in detail here. Short answer: matching quality and transparency matter more than raw volume.

Does AI write good cover letters?

Good enough. The AI-drafted letters in this experiment were not impressive individually but they were better than generic templates. The key is calibrating the output before you let it run at scale. Five minutes of review upfront saves you from sending the same mediocre letter to 50 companies.

Ready to ghost the grind?

Stop filling out forms. Let AI find and apply to the right jobs for you.

Get Started Free

10 free applications. No credit card required.