5 Q’s for Nicholas Shekerdemian, Co-Founder and Executive Chairman of Headstart – Center for Data Innovation
The Center for Data Innovation spoke with Nicholas Shekerdemian, co-founder and executive chairman of Headstart, a London-based platform that uses AI to help companies evaluate which candidates to hire. Shekerdemian discussed how AI can improve diversity in recruiting by leveling the playing field for applications from disadvantaged candidates.
Eline Chivot: How did you come up with the idea behind Headstart?
Nicholas Shekerdemian: I started the company when I was still at university, and my co-founder joined about six months later. We then worked with our development team to build out the platform and the early algorithms. The idea behind Headstart was to help young people work out what they want to do: to empower them to use data to make better-informed career decisions. That still sits in our DNA. However, over time, we realized that our technology was better suited to helping large corporations evaluate job candidates without bias and subjectivity. And so, although the job-recommendation algorithms we originally built still exist today, a lot of our work is now focused on helping large, Fortune 500 (or equivalent-sized) companies take a more data-driven approach to evaluating potential candidates.
We help large corporations look past the paper element of CVs and toward the human element. This can include a psychometric aspect to evaluate personality, values, and motivations, but also elements that predict potential. For instance, if a consulting firm using Headstart is recruiting for a consulting job, we would consider: What does it mean if a candidate worked behind a counter at McDonald’s for three years? How is that likely to help him or her in a consultant role, and what skills is he or she likely to have? Right now, when recruiters review CVs, they look at “what the words say” and at how prestigious a candidate’s previous employer or university was, not at what candidates actually did with their time or what they learned.
Chivot: Why is AI useful in the field of human resources? In what ways does it help recruiters and applicants?
Shekerdemian: AI can’t necessarily uncover every problem that exists, but what it can do is provide an objective view on something. A lot of what we do is help companies create a consistent series of data points to work out whether someone is a good fit, so that every single candidate who goes through a process is evaluated in exactly the same way. Machine learning and AI can take a large dataset and convert it into something digestible and manageable within one recruiting process: one that doesn’t require thousands of resources, yet can produce the desired outcome without the subjectivity of a single human reviewer or of different people evaluating a batch of different CVs. For us, it is about harnessing as much data as possible within one single process, and ensuring that there is no subjectivity in each of the different analyses, so that every candidate’s application is treated fairly and consistently.
Chivot: How does your AI-powered talent matcher work and what is it designed to do?
Shekerdemian: When onboarding a customer, the first thing we do is submit insight questionnaires to all employees of the organization. These questions are categorized by, for instance, performance or retention metrics, or whatever the organization defines as a metric of success. Those insight surveys are sent to as broad a range of employees as possible; we don’t want to include only high-performing people, but both high and low performers, people who have just joined, and people who have been working there for a while. The idea is to get a mix of different people so we can understand, from a psychometric perspective and from a skill-distribution perspective, which employees are doing well, and which traits and attributes the organization believes predict high performance and success. We try to collect as much data as we can because, obviously, the larger the sample, the more accurate the results.
The second thing we do is to contextualize that with the millions of publicly available job descriptions we’ve analyzed and indexed on the Internet. We did that to understand, for instance, for a role in consulting in the United States, for someone who has two years of experience, what are the things that recruiters typically require: What does it mean when they write that they want the candidate to be “driven,” or to have five years of relevant experience? What do these things mean, in the context of the role itself?
The third thing we do is to look at actions specific to the job description, and we try to understand, from the company’s perspective, what it is looking for. None of that is manual; all of it is essentially collecting, amalgamating, and analyzing that data. After collection, the data is automatically converted into various clusters that set the categorization and weightings of our different algorithms, based on the data we’ve analyzed during the onboarding process. We don’t change the algorithms themselves, because these are constantly learning and growing over time; we change the weightings that are unique to the specific role-type of the specific organization we’re working with. That essentially defines the original fingerprint for a specific role, which we use as the baseline analysis for working out whether a candidate is a good fit. We can then start to see who is getting hired through the process and how our algorithms are predicting the rankings of candidates in our recruiting funnel, and patterns and trends begin to emerge. One of the patterns our algorithms observe is bias in the process, in which case we flag it to our client. Sometimes our algorithms weren’t 100 percent accurate at the beginning, which is why we always recommend that our clients thoroughly review each recommendation we have made; our system needs to learn over time what the real criteria of success are. We then obtain a more accurate baseline in our analysis, which we can improve and optimize over time.
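The fingerprint-and-weightings idea described above can be sketched in simplified form: a role "fingerprint" is a set of baseline feature scores, each with a per-role weight, and a candidate's fit is how closely his or her profile matches it. The feature names, weights, and scoring formula below are illustrative assumptions, not Headstart's actual algorithms.

```python
# Hypothetical sketch of role-"fingerprint" matching. All feature
# names, baselines, and weights are invented for illustration.

def fit_score(candidate, fingerprint, weights):
    """Weighted similarity between a candidate's feature profile and a
    role's baseline fingerprint (both dicts mapping feature -> 0-1 score).
    Returns a value in [0, 1]; 1.0 means a perfect match on every feature."""
    total_weight = sum(weights.values())
    score = 0.0
    for feature, weight in weights.items():
        # Penalize the absolute gap between candidate and baseline.
        gap = abs(candidate.get(feature, 0.0) - fingerprint[feature])
        score += weight * (1.0 - gap)
    return score / total_weight

# Role fingerprint hypothetically derived from employee survey data;
# weights reflect which traits this role-type treats as most predictive.
fingerprint = {"analytical": 0.9, "teamwork": 0.7, "resilience": 0.8}
weights = {"analytical": 3.0, "teamwork": 1.0, "resilience": 2.0}

candidate = {"analytical": 0.85, "teamwork": 0.6, "resilience": 0.9}
print(round(fit_score(candidate, fingerprint, weights), 3))
```

In this framing, "changing the weightings, not the algorithms" simply means re-tuning the `weights` dictionary per role and organization while the scoring function stays fixed.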
All of our machine learning is built in-house, although of course we use some third-party benchmarks. So, for example, when we first built our psychometric assessment, we based it on the “Big Five,” whose principles are among the most commonly used in psychometric testing. We built on that baseline by developing our own algorithms.
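As an illustration of how a Big Five-based assessment is typically scored, the sketch below averages Likert-scale (1-5) responses per trait, flipping reverse-keyed items before averaging. The item keying here is invented for illustration and is not Headstart's proprietary assessment.

```python
# Minimal Big Five-style scoring sketch. The mapping of items to
# traits, and which items are reverse-keyed, is a made-up example.

TRAITS = {
    # trait: [(item_id, reverse_keyed), ...]
    "openness":          [(1, False), (6, True)],
    "conscientiousness": [(2, False), (7, True)],
    "extraversion":      [(3, False), (8, True)],
    "agreeableness":     [(4, False), (9, True)],
    "neuroticism":       [(5, False), (10, True)],
}

def score_big_five(responses):
    """responses: {item_id: 1-5 Likert answer}. Returns trait -> mean score."""
    scores = {}
    for trait, items in TRAITS.items():
        values = []
        for item_id, reverse in items:
            answer = responses[item_id]
            # Reverse-keyed items are flipped so 5 always means "more of the trait."
            values.append(6 - answer if reverse else answer)
        scores[trait] = sum(values) / len(values)
    return scores

responses = {1: 4, 2: 5, 3: 2, 4: 4, 5: 1, 6: 2, 7: 1, 8: 4, 9: 3, 10: 5}
print(score_big_five(responses))
```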
Chivot: What are the various “de-biasing” techniques that exist to improve datasets? What are successful examples? Why is it a difficult process, and what other solutions could help?
Shekerdemian: Our firm belief is that the best way to remove bias to the greatest extent possible—bearing in mind that you can never fully remove bias from the process—is to ensure that every applicant goes through the same process. Of course that’s almost impossible in recruiting, because we do not automate the entire process. There are still interviews, which involve human interaction and manual review. But we do try to ensure that the funnel at the top of the recruiting process, when we first receive applications for the company we work with, is as broad and representative of a diverse range of people as possible before the interview stage. The way we do that is by ensuring that no candidate is biased against in the initial screening at the onset of the process.
Another way is to analyze the trend data, where we can identify biases in the latter part of the process and flag them to our client. And finally, something that is also very important to us is what we call contextual recruitment: This helps us understand what it has taken for you to get from where you used to be to where you are today. We look at opportunity coefficients based on your background. If you went to a school where you got three As but everyone got three As, that being the average grade, you’ve done very well, but only as well as your peers. Whereas if you went to a school where everyone usually gets three Cs but you got three As, that says a lot about you, your character, and your ability to outperform. That doesn’t negate what anyone else has achieved and how. It surfaces the drive and other attributes you’re likely to have as a person, takes your contextual background into account, and captures achievements that might otherwise have been overlooked in a traditional process.
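The "three As" example amounts to a simple normalization: a candidate's grades are judged relative to the average at his or her own school. The z-score-style calculation below is an assumed simplification for illustration, not Headstart's actual opportunity coefficient.

```python
# Illustrative "contextual recruitment" sketch: grades are scored
# relative to the candidate's school, not in absolute terms. The
# point scale and normalization are assumptions, not Headstart's method.

def contextual_score(candidate_points, school_mean, school_std):
    """How far the candidate outperformed peers at the same school,
    in standard deviations (0.0 = average for that school)."""
    if school_std == 0:
        return 0.0  # no variation at the school: nothing to compare against
    return (candidate_points - school_mean) / school_std

# Assumed grade points: A=4, B=3, C=2; three subjects each.
# Three As (12 points) at a school where three As is the norm:
print(contextual_score(12, school_mean=12, school_std=1.5))

# Three As at a school where three Cs (6 points) is the norm:
print(contextual_score(12, school_mean=6, school_std=1.5))
```

The same absolute result (three As) yields very different contextual scores, which is exactly the asymmetry the interview describes.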
Chivot: What should be prioritized in terms of recruitment angle: Qualifications and skills, experience, personality?
Shekerdemian: That’s a really difficult question. It depends on the role, but I think it’s never one single data point. If it were, recruiting would be easy for everyone. For every role, every organization, and every level of seniority, everything is determined by a series of different processes, and the same attribute can mean different things in different contexts. In the legal profession, of course, your law qualifications are probably relatively important, but for a consulting role, those qualifications may matter less. What we’re starting to see are variations across the different role-types, and our system is able to emphasize or de-emphasize variables depending on what our data identifies as the key predictors of success. To use another example: of course, to be a doctor, you need medical qualifications; you can’t get away without them. But for a role in investment banking, do you need a finance degree? Not necessarily. You could have a degree in modern languages, or anything else; it doesn’t matter that much. Your training on the job will inform that and make the difference.
Whatever the role, it’s always a cluster of different data points; it depends on what is most important for that role. In the example of medical practitioners, your attributes, your personality, and various other elements mean something, but they mean less than in other professions because the medical qualification is so vital.
Chivot: How are technologies like AI changing hiring practices?
Shekerdemian: I think that, undoubtedly, data as a whole will continue to optimize and improve the recruiting process. I don’t believe in a fully automated recruiting process, though. A lot of the technology right now, such as facial recognition, remains nascent. I fundamentally believe that tools used in the process without the context of the rest of the data you need to make an informed decision are not really solving the problem. You can have one tool for video interviews or one tool for CV analysis, but without the end-to-end data stream, it’s very difficult to make an informed decision. You can, of course, have a bunch of meaningful data, but that doesn’t mean it gives you the whole picture. That makes me nervous, because the negative effect of one issue taken out of context backfires on the machine-learning and data industry and the work it has done as a whole. I would say there is no argument for not incorporating data more broadly into recruiting processes, and machine learning and AI are a natural extension of that, because ultimately it’s the best way to make a fair and informed decision.
With Headstart, we will continue to build out these end-to-end processes as a platform: a contextual picture of what a person is like and what he or she is capable of, to gain a full understanding of the entire picture. Without that 360-degree picture, you’d have only one piece of the puzzle: What would be the point?