Gaining insights without a mention of statistics or machine learning
Last week, I found myself talking to a colleague about the various analytical projects we have been involved in over the years. Needless to say, the conversation ventured down a rabbit hole of complicated statistical analyses, technical terminology and super interesting discoveries. By the end of the conversation, I made sure to bring up one specific analysis I performed a few years ago which uncovered a storyboard of actionable insights and made no mention of statistical significance. No mention of correlation or confusion matrices. No mention of TensorFlow or recurrent neural networks.
About two years ago, a friend of mine attended a SuccessFactors conference at which I had presented on the power of the analytical tools native to the SuccessFactors platform. After the conference he snagged my attention for a few minutes and asked if I could help him with a hiring problem he was having. Despite his best efforts, he was having a tough time converting high-potential candidates into hires. The problem was a significant roadblock to a short-term corporate strategy, which required a diversification of skill sets as the company prepared to enter a new product market. I asked him to send over his data so I could take a look.
Upon receiving the data, I decided to dissect the recruiting process into three separate parts: the application, the applicant and the requisition.
First, I examined the application process by deriving metrics such as “Application Completion Rate”, “Application Drop-Off Rate” and “Average Time to Complete the Application”. To my surprise, completion rates were above 80%, drop-off rates were minimal and it took on average three minutes to submit an application. The problem certainly did not reside with the application process.
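These metrics are simple enough to build in a spreadsheet, which is exactly where this analysis lived. Purely for illustration, here is the same arithmetic as a short Python sketch; the field names (started_at, submitted_at) and the records themselves are hypothetical stand-ins for whatever an ATS export actually contains.

```python
# Sketch: deriving application-process metrics from raw application records.
# All field names and values are hypothetical, not the client's data.
from datetime import datetime

applications = [
    {"started_at": datetime(2018, 5, 1, 9, 0),  "submitted_at": datetime(2018, 5, 1, 9, 3)},
    {"started_at": datetime(2018, 5, 1, 10, 0), "submitted_at": datetime(2018, 5, 1, 10, 4)},
    {"started_at": datetime(2018, 5, 1, 11, 0), "submitted_at": datetime(2018, 5, 1, 11, 2)},
    {"started_at": datetime(2018, 5, 1, 12, 0), "submitted_at": datetime(2018, 5, 1, 12, 3)},
    {"started_at": datetime(2018, 5, 1, 13, 0), "submitted_at": None},  # abandoned application
]

# A submitted application has a non-null submission timestamp.
submitted = [a for a in applications if a["submitted_at"] is not None]
completion_rate = len(submitted) / len(applications)
drop_off_rate = 1 - completion_rate
avg_minutes = sum(
    (a["submitted_at"] - a["started_at"]).total_seconds() / 60 for a in submitted
) / len(submitted)

print(f"Completion rate: {completion_rate:.0%}")       # 80%
print(f"Drop-off rate: {drop_off_rate:.0%}")           # 20%
print(f"Avg time to complete: {avg_minutes:.0f} min")  # 3 min
```

The point is the metric design, not the tooling: the same three formulas are a one-line calculation each in Excel.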
Next, I turned my attention to the applicant. Which sources were driving the applicant pools? Which sources were producing quality applicants? Having large applicant pools significantly increases the probability of hiring quality candidates. The analysis produced mixed results: the applicant pools were sufficient but highly skewed. Three out of 17 candidate sources were driving 80% of the applicants and most of the quality applicants (i.e. candidates who made it to the hiring-manager interview stage). These results did not necessarily point me toward the underlying hiring issue, but I uncovered potential efficiency and cost-saving opportunities. Smaller insights like these are very common in a data mining exercise.
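A source-concentration check like this boils down to ranking sources by volume and watching the cumulative share. A minimal sketch, with made-up source names and counts chosen so that the top three carry most of the pool:

```python
# Sketch: applicant-source concentration analysis.
# Source names and volumes are hypothetical; the real export had 17 sources.
from collections import Counter

applicant_sources = (
    ["Job Board A"] * 420 + ["Company Site"] * 260 + ["LinkedIn"] * 180 +
    ["Referral"] * 60 + ["Career Fair"] * 40 + ["Agency"] * 40
)

counts = Counter(applicant_sources)
total = sum(counts.values())

# Rank sources by volume; the running cumulative share exposes the skew.
cumulative = 0.0
for source, n in counts.most_common():
    share = n / total
    cumulative += share
    print(f"{source:<13} {share:6.1%}  cumulative {cumulative:6.1%}")
```

Seeing 80%+ of applicants arrive through the first three rows of this table is what flags the source budget as a candidate for reallocation.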
Next, I began to analyze the requisition data more closely. First, I examined how long on average it took to fill a requisition (Time to Fill was defined as Req Fill Date minus Req Posting Date). I focused specifically on the technology requisitions as they had the most immediate impact on the business strategy. To my surprise, it took on average over 50 days to fill a req (i.e. a 1-to-1 ratio of reqs to positions), a metric which had not wavered month over month for the last six months. What was more troubling was the sub-5% hiring rate over the same six months. These results definitely validated the concerns my friend had outlined, but what was causing them? In-demand technology candidates can often have two or three offers on the table, so timing was critical. I discovered that the company employed eight recruiters who each managed anywhere from 15 to 25 requisitions, which is not necessarily a large workload, but when time is of the essence it was definitely part of the problem.
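The Time to Fill definition above is just date arithmetic over the requisition export. A sketch with illustrative dates (not the client's data):

```python
# Sketch: Time to Fill = Req Fill Date - Req Posting Date,
# averaged over the technology requisitions. Dates are illustrative.
from datetime import date

tech_reqs = [
    {"posted": date(2018, 1, 3),  "filled": date(2018, 2, 26)},
    {"posted": date(2018, 1, 15), "filled": date(2018, 3, 8)},
    {"posted": date(2018, 2, 1),  "filled": date(2018, 3, 21)},
]

# Subtracting two dates yields a timedelta; .days gives whole days.
fill_days = [(r["filled"] - r["posted"]).days for r in tech_reqs]
avg_time_to_fill = sum(fill_days) / len(fill_days)
print(f"Average Time to Fill: {avg_time_to_fill:.1f} days")  # 51.3 days
```

Grouping the same calculation by posting month is what showed the metric holding steady, month over month, for six straight months.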
I then focused my attention on the hiring process itself to better understand what was causing such drastic time-to-fill numbers. Using a candidate funnel analysis, I uncovered a 12-step hiring process! The candidate drop-off rates between stages and the average time in each stage were all within the norm until the candidate landed in the “Assessment” stage. Of the candidates who made it to this stage, only 4% were being moved to the next stage, “Create Offer”. Furthermore, the vast majority of the candidates who were dispositioned had voluntarily terminated the application process! Looking at average time in stage, it took on average 16 days from the time a candidate was assessed before they made it to the “Create Offer” stage, and another 6 days to actually receive an offer. I had uncovered the final piece of the puzzle impacting the hire conversion rates.
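A funnel analysis is nothing more than stage-to-stage conversion rates computed down an ordered list of stages. The sketch below uses hypothetical stage names and counts (only a few of the 12 steps shown), sized so the “Assessment” bottleneck stands out:

```python
# Sketch: candidate funnel analysis. Stage names and counts are hypothetical,
# chosen to mirror the ~4% Assessment -> Create Offer conversion described above.
funnel = [
    ("Applied",        1000),
    ("Screen",          400),
    ("Assessment",      250),
    ("Create Offer",     10),  # the bottleneck stage
    ("Offer Extended",    9),
]

# Pair each stage with the next one and report the conversion between them.
for (stage, n), (next_stage, next_n) in zip(funnel, funnel[1:]):
    conversion = next_n / n
    print(f"{stage:>14} -> {next_stage:<14} {conversion:6.1%}")
```

One healthy-looking row after another, then a single-digit conversion: that is the shape of the table that pointed straight at the assessment and offer-creation stages.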
The ultimate culprit was a poor candidate experience. Recruiters were forced to handle too many requisitions, which meant they were over-burdened with candidates. Hiring additional tech-savvy recruiters would spread the workload and allow candidates to be interviewed faster, ultimately moving them through the process more efficiently. Furthermore, I recommended a revamp of the hiring process in order to reduce the number of stages a candidate had to endure before getting an offer. This would speed up the hiring process without significantly impacting the quality of candidates. Providing recruiters with training and a standardized behavioral assessment would help remove bias from the selection process, produce candidates of consistent quality and speed candidates through the pipeline. Next, I recommended a detailed review of the offer-creation workflow in order to streamline the process and reduce the number of days it took for a candidate to receive an offer. Finally, as a cost-saving insight, I recommended that the company further examine its source budget allocation, since the vast majority of candidates were arriving from only a handful of sources. One of the most potent sourcing avenues, referrals, was not being utilized to its full potential.
We very often stumble upon inspiring articles about companies utilizing advanced technology and statistical know-how to derive insights which make an enormous impact on the organization. The knowledge, skills and sheer amount of data required for some of these analyses simply boggle the mind. But that does not mean actionable insights cannot be gleaned from a single data source by a savvy analyst using Microsoft Excel. A keen understanding of the data, sound metric design, technological know-how and contextual knowledge of the problem can go a long way.