The Secret to Filtering 100+ Applicants Without Losing Your Mind

We’ll tell you outright—it’s automations. Specifically, automations built and set up to do the heavy lifting during the testing and evaluation of applicants. But here’s the thing: automations only work if they’re set up with intention, starting from the very moment you design your assessments. In this post, we’ll walk you through the nuts and bolts of our hiring system at Wise Scout. Read on to learn how we filter hundreds of applicants efficiently without sacrificing quality, and how you can do the same.

IT STARTS WITH THE TESTS

One thing we would like to emphasize is that the processes of test selection, administration, and filtering go hand-in-hand, and can even be considered three smaller parts of one long process. 

In our previous blog post, Our Step-by-Step Process for Hiring Offshore Rockstars, we laid out how we conceptualize a roster of assessments suitable for a specific role. This involves going back to the job description and identifying 3–5 priority traits. We gave the following example traits for an ideal bookkeeper:

  1. High general intelligence

  2. At least intermediate computer and internet literacy

  3. At least some accounting or finance background

  4. Intermediate to advanced mastery of spreadsheet applications like Excel

From here, we select or craft assessments to measure these qualities using assessment platforms like Criteria, TestGorilla, iMocha, and HiPeople. These platforms provide the automations we need to manage the large early-stage candidate pool.

As candidates take the tests, the assessment platform scores them automatically and generates detailed reports that highlight each candidate’s strengths, job fit, and percentile rankings based on norm groups.

TESTS WE COMMONLY USE

The Criteria Cognitive Aptitude Test (CCAT) and the Universal Cognitive Aptitude Test (UCAT) are our general aptitude assessments of choice. Both tests gauge an applicant’s overall problem solving, critical thinking, information processing and application, and skill acquisition.

The main difference between the two is that the UCAT doesn’t test verbal ability or verbal reasoning, so it’s language-independent and ideal for non-native English speakers and international candidates. Since some CCAT questions can be disadvantageous for non-native English speakers, we reserve this test for roles that are heavy on communication, e.g., customer service, executive assistance, and managerial positions. For many clerical jobs, the UCAT will suffice.

For non-client-facing roles that require a good level of English proficiency like email support and social media management, we sometimes administer the UCAT and the Criteria Language Proficiency Test - English (CLPT-EN). The latter measures English reading, writing, and listening skills. If we’re looking for a writer, we might let applicants take both the CCAT and the CLPT-EN.

Among skills tests, the Computer Literacy and Internet Knowledge (CLIK) test is one of our favorites. As the name suggests, this assessment measures how familiar an applicant is with basic computer and internet functions. Given that remote staff accomplish the entirety of their tasks on a desktop or laptop, operating a computer and browser with at least intermediate proficiency is a must.

COLLECTING INFORMATION AND DOCUMENTS

Assessments aren’t the only things we can automate with these platforms. Most of them also allow the convenient collection of CVs, resumes, and essential applicant data via digital application forms.


THE GREAT FILTER

Our automated hiring machine at full throttle looks like this: our employer profile receives dozens of application messages per hour on the hiring platform. We reply to applicants who followed the instructions specified on the job post with a secure link to the assessments. (Note: if your job post says you’ll reply with a link to the assessments, applicants will expect the link and click it.) Meanwhile, on the assessment platform, test results come in, which you can monitor through a central dashboard if you’ve set up your automations correctly.

Most assessment platforms have some version of a central tracking mechanism through which users can see how many candidates are at a specific stage and who they are. We sometimes refer to this as a “job pipeline,” and the stages are usually customizable as well.

You ultimately decide

The handy thing about these automations is that they only expedite the evaluation process by eliminating repeatable menial tasks. It’s still you who decides what to do with your applicants. So how do you make the first cut?

At this point, it’s good to review the 3–5 priority skill sets you had in mind when curating assessments. Since the tests were created or selected based on these traits, they should be your evaluation filters as well.

Recall that earlier, we conceptualized an ideal bookkeeper to be: 

  1. Generally intelligent

  2. Adept at using a computer

  3. Familiar with our bookkeeping tools

  4. Intermediate to advanced in their mastery of spreadsheet applications

We then created a lineup of tests to measure these characteristics.

Now we have their results laid out neatly before us on the assessment platform. Narrowing down the pool involves taking your highest-priority characteristic (e.g., familiarity with bookkeeping tools) and setting a cutoff score for the test that measures it. This serves as the first filter. Repeat for the next priority skills. You need not make cuts for every test; as long as you have a clear justification for your choices, there are generally no wrong answers.
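To make the mechanics concrete, here is a minimal sketch of that sequential filtering in Python. The candidate records, test names, and cutoff values are all hypothetical stand-ins for whatever your assessment platform exports:

```python
# Hypothetical candidate results exported from an assessment platform.
candidates = [
    {"name": "A", "bookkeeping": 85, "ucat": 78, "clik": 90},
    {"name": "B", "bookkeeping": 60, "ucat": 88, "clik": 70},
    {"name": "C", "bookkeeping": 92, "ucat": 65, "clik": 80},
    {"name": "D", "bookkeeping": 88, "ucat": 82, "clik": 55},
]

# Filters in priority order: (test name, minimum passing score).
# The highest-priority trait goes first.
filters = [("bookkeeping", 75), ("ucat", 70)]

pool = candidates
for test, cutoff in filters:
    pool = [c for c in pool if c[test] >= cutoff]

print([c["name"] for c in pool])  # → ['A', 'D']
```

Applying the filters in priority order means the most important trait eliminates the most candidates first, which keeps later, finer-grained comparisons manageable.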

That said, these decisions are by no means arbitrary. If we want an executive assistant with a good head on their shoulders above everything else, we might want to use a general aptitude assessment as the first filter. If we need to fill a data-entry role, we might filter based on a typing accuracy test first.

How much to cut?

We try to reduce the candidate pool to those at or above the 75th–80th percentile (around 15–20 people) after two or three filters. Beyond this point, the margins between candidates are smaller, but it’s still possible to whittle down the list using quantitative methods. At Wise Scout, we do this manually by adding our experience questionnaire into the mix. This is a custom test we create to gauge candidates’ experience working similar jobs and/or using the software we rely on for day-to-day operations.

To do that, we quantify their experience questionnaire answers using an in-house scoring system and export these scores and answers to a spreadsheet like Excel or Google Sheets. The spreadsheet also contains the basic information and assessment results of shortlisted applicants. Its main purpose is to extract the top ~10 of the remaining 15–20 candidates, and it’s also handy for taking down notes and comments during the interview stage later.
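The same ranking you’d do in a spreadsheet can be sketched in a few lines of Python. The point scale, candidate records, and cutoff are invented for illustration; our actual in-house scoring system is more detailed:

```python
# Assumed point scale for questionnaire answers (illustrative only).
EXPERIENCE_POINTS = {"none": 0, "some": 10, "extensive": 20}

shortlist = [
    {"name": "A", "assessment_total": 85, "experience": "extensive"},
    {"name": "B", "assessment_total": 90, "experience": "none"},
    {"name": "C", "assessment_total": 80, "experience": "some"},
]

# Combine assessment results with quantified experience.
for c in shortlist:
    c["total"] = c["assessment_total"] + EXPERIENCE_POINTS[c["experience"]]

# Rank by total score, highest first, and keep the top N.
TOP_N = 2
top = sorted(shortlist, key=lambda c: c["total"], reverse=True)[:TOP_N]
print([(c["name"], c["total"]) for c in top])  # → [('A', 105), ('B', 90)]
```

In practice you’d pick `TOP_N` (or a score threshold) after seeing where natural gaps appear in the totals, as described below.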

Here’s what their basic info and assessment results (with experience ratings) look like after a little computing and data wrangling:

It’s now up to us how many of the top applicants to advance to the next stage, but we try to keep the number below ten. In the example above, a solid cutoff would be a total score of 100, as seven people reached that threshold. That’s a good number of interviewees.

Bonus: Running background checks on Onlinejobs.ph

If we want to know more about our candidates or trim down the number a little bit more, we consider running a background check. Premium accounts on Onlinejobs.ph can run these incognito. Using a premium employer account, simply go to an applicant’s profile and click on the Background Check tab. You’ll see the following information:

Onlinejobs.ph and Facebook profiles 

Any detected Facebook accounts of the applicant are compared side-by-side with their OLJ profile, which can be useful in verifying a person’s identity. Most Filipinos with internet access have an active Facebook account.

Related accounts

We can see other OLJ accounts that have logged in using the same device as the user. Most of the time, these profiles belong to the person’s family members, spouse, or friends. We look out for accounts with similar names or profile pictures as these might be duplicates used by the person to apply for similar jobs. Alternate accounts can also be used to make employers think the job seeker isn’t already employed elsewhere.

Employment history

This shows how many jobs the applicant has worked via OLJ hiring. We can see if they’re currently employed based on the dates and duration of employment, and even contact previous employers to verify information. 

Jobs applied for

This is the account owner’s entire application history on OLJ. Someone constantly applying might be working more than one job or doing gigs/contractual work. If they indicate on their About page that they’re looking for a full-time job, it could mean their job search hasn’t been successful. You wouldn’t want to see someone you’ve already hired full time being active here.

Edit Logs

This shows the dates on which the account owner edited their profile. Someone consistently editing their information over long periods could be trying to appear in as many search results as possible.

For a visual guide to the information provided in the background check, read this annotated sample created by OLJ themselves.

The difference between a chaotic hiring process and a smooth one is the system behind it. At Wise Scout, we combine our expertise with industry-leading assessment tools to make sure you’re only talking to the best candidates.

 
