The New Era of Interviewing: Reducing Bias from Start to Finish

Decades of research have consistently shown that well-developed and executed interviews are among the strongest predictors of a successful hire. Asking the right questions for the right hires is vital – but do hiring teams truly know which questions will lead to the best outcome?

Join us to learn:

  • How the pandemic led to significant shifts in the hiring process – especially with interviews
  • The most formidable challenges in interviewing
  • How new HR technology – like AI-powered automated interview tools – can boost your hiring results by identifying best-fit candidates in a fast, consistent, fair, and engaging process

Grant: [00:00:18] Good morning, everybody. We are at the top of the hour, but I see a number of participants are still logging in, so we might just give it a minute before we kick things off. All right, the number of participants seems to be levelling out now, so I presume everybody who will be joining us today is here. So welcome to our webinar, entitled The New Era of Interviewing: Are You Ready for Fairer Hiring? I'd like to introduce my colleague, Eric Sydell, our EVP of Innovation, who will be co-presenting with me today. My name is Grant, and I lead the APAC region for Modern Hire. Before we jump into the actual presentation, some housekeeping. If anybody has any questions, please pop those into the Q&A section and we'll have a look at them. We're happy to try to answer those as we progress through the presentation, and if we don't get to all of them, we'll respond to those after the session. The question that we always get is whether the session will be recorded. It will be, and we'll be distributing the recording to all who registered for the session as well. And finally, we're going to be covering a number of issues on the topic today, but if you follow Modern Hire on LinkedIn, you'll be able to see this presentation and a lot more as well, so I encourage you to jump on and follow Modern Hire. Before we get into the topic, a little bit about Modern Hire. We're an organization comprising psychologists and technical folk who specialize in helping organizations with their talent acquisition processes. That's reflected in the team makeup, where we have over 45 PhD-qualified psychologists and just as many or more technical folk who help us bring together this combination of good science with technology.
We deliver those solutions to over 700 clients across the world, and we have over 30 million interactions with our platform every year. Some of our biggest clients put over 20,000 candidates per day through our platform, so we feel we're able to stand up to the rigors of those very large recruiting organizations in terms of delivering our services. When we talk to our clients, we find they're grappling with four key areas when it comes to talent acquisition, and we feel Modern Hire is uniquely placed to deliver on those four problems that most organizations are dealing with. From an efficiency perspective, we continue to focus on helping organizations deal with high volumes of candidates, reduce time to fill, and get recruiters focused on what they do well: interacting with candidates, as opposed to getting caught up in the administrative activities associated with hiring. From an effectiveness perspective, it's about identifying top candidates early in the process and potentially fast-tracking them through your hiring process, looking to reduce early-stage turnover and the costs associated with it. From an ethical perspective, it's about fairness. Fairness is absolutely central to everything we do: making sure our solutions are job relevant, taking into consideration issues like diversity and inclusion, and making sure we comply with all the legal requirements around fair and ethical hiring. And finally, from an engagement perspective, it's about making sure the hiring process is a positive experience not only for candidates, but also for hiring managers and recruiters, so everybody involved in that hiring process.
So the question then is how do we deliver on the promise of meeting all four of those E's simultaneously? As I mentioned earlier, it's about bringing our science, which we call Cognition (that's our science brand), together with technology to implement solutions which are grounded in strong science but delivered in an efficient and candidate-friendly manner. But it doesn't stop there; it's about continuous improvement as well, in terms of optimizing our solutions. While we have a number of standardized solutions, we work with our clients to continuously improve them, to make sure we're making the strongest predictions of job success when interacting with candidates. That optimization applies to our platform as well, with weekly updates to the platform, and our services are really wrapped around that model. Some of the services we provide are shown in the black writing and the yellow boxes around that model. It could be text interactions with candidates early in the process, looking at minimum qualification criteria like work rights and qualifications, and getting responses quickly from candidates through a text-based solution. It could be realistic job previews, sharing information with candidates about the organization and the role, so that everybody is informed and on this journey of mutual discovery as we progress through the hiring process. There are our Virtual Job Tryout assessments, to identify the top performers early in the process; our on-demand interviews, which we'll be talking about in more depth today, those asynchronous recorded videos which are very common in hiring processes today; self-scheduling, once again helping recruiters by letting the technology deal with the administrative tasks and getting recruiters speaking to candidates; and then our live interview platform as well.
So let's look at those four E's, those four challenges, through the lens of the interview process. Job interviews have been around for as long as I can remember, but even today interviews sometimes fall short in terms of delivering on those challenges. From an effectiveness perspective, interviews very often lack well-developed competency-based questions and, more importantly, lack objective evaluation criteria. From an efficiency perspective, a lot of organizations just don't have the resources to have people create well-designed questions, because they're too busy focusing on candidates and ensuring they're well looked after through the selection process. From an ethics perspective, job relatedness is obviously critical, and we all know the problems associated with unstructured interviews: they just don't predict accurately, although they're still used very commonly in hiring processes today. And finally, poorly structured interviews or incorrect interview questions in a hiring process can give candidates a poor experience, which may lead to them opting out of the process. So at this point I'm going to hand over to Eric to share some slides on the history of interviewing, and then he's going to take a deeper dive into some of the problems associated with interviewing as well, or bad interviewing, should I say.

Eric: [00:08:14] All right. Thanks, Grant. Can you see my screen okay? Okay, great. Thanks, everyone, for joining again. My name is Eric Sydell, and I'm an industrial psychologist by background, so I'll talk a little bit about the psychology that goes into interviewing and making good-quality decisions about other people. To start out, we want to talk about the history of interviewing. Interviewing is one of the oldest tricks in the book when it comes to hiring, and I'm pretty sure that as long as there have been humans, there's been interviewing. It's a pretty ancient technique, where people talk to each other and try to determine whether one person thinks the other is a good fit for something. In this case we're talking about jobs, but you can interview people for lots of other things. Through the vast majority of recorded history, interviews were just conversations, unstructured conversations between two people chit-chatting about who they are and trying to get to know each other, not what you would call a scientific type of experience. And lo and behold, when modern scientific methods started to look at the effectiveness, the validity, the predictive power, and the fairness of interviews, they didn't find much. There wasn't much there. So around the 1980s and 1990s, we started to see a new approach to interviewing come into play, and that was structured behavioral interviewing. This is a type of interviewing where we basically try to make it very objective, very rigorous, very structured. So you as an interviewer would select from standard questions that have already been written. You're not making up your own questions; you're asking questions that are written down.
And then when you hear a response, you rate that response on a numerical scale using anchors, anchors that give examples of good, bad, and medium-quality answers. So the idea with structured behavioral interviewing is to make you, the interviewer, more like a computer: more objective, more rigorous. That is the standard interviewing technique that was developed in the eighties or so, and it is still the best way for one human to interview another human without the assistance of technology, spoiler alert. So most people should be doing structured behavioral interviewing today, and if not, they should learn how to do it. When scientific researchers look at those interviews, what they find is that, lo and behold, they actually work very well to predict performance, to predict success in a job, and they are relatively fair. There's a level of fairness to them that is greater than the unstructured interviews of the days of yore, the Wild West of interviewing. But there's still bias. There's still bias in any sort of human decision, and we'll talk a little bit more about that. Fast forward to the last several years, and all of a sudden the world has changed dramatically due to algorithms and artificial intelligence. You see it coming into every aspect of the economy, not just hiring: cars that drive themselves, smart speakers that understand what you're saying and respond to you, things of this nature, all driven by algorithms and AI. This type of technology is very transformative, and we're using it in hiring now as well. What we're doing is we can now take what a person says in an interview, transcribe it into words automatically, and then score those words for meaning. So it's a very exciting time. We're going to talk more about that technology and show you some examples.
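The anchored-rating idea Eric describes can be sketched in a few lines of code. This is a minimal illustration only; the anchor wording, scale, and aggregation rule below are invented for the example and are not Modern Hire's actual rubric.

```python
from statistics import mean

# Example behaviorally anchored rating scale (BARS) for one structured
# interview question. The anchor text below is invented for illustration.
ANCHORS = {
    1: "Vague answer; no concrete example of the behavior",
    3: "One concrete example, but little detail on actions or results",
    5: "Specific situation, actions taken, and a measurable outcome",
}

def score_candidate(ratings_by_rater: dict[str, list[int]]) -> dict[str, float]:
    """Average each rater's per-question ratings, then average across raters."""
    per_rater = {rater: mean(scores) for rater, scores in ratings_by_rater.items()}
    per_rater["overall"] = mean(per_rater.values())
    return per_rater

# Two raters scoring the same candidate on three questions
scores = score_candidate({"rater_a": [4, 5, 3], "rater_b": [3, 4, 4]})
```

The point of the structure is that every rater works from the same anchors and the result is a number that can be compared and audited, rather than a gut feeling.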
But suffice it to say, at this point, with modern AI, we're able to score what a person says. And once we can score it, we can study it statistically and find out how well it works and whether it's biased or not. The short answer is that it works very well, and there's a much lower level of bias in the automatic scoring we've developed than you would see with humans. We'll talk a little bit more about that as we move forward. I want to talk about what is really my favorite topic right now, not because I like it, but because it's just so interesting, and that's bias. I think of humans as bias machines. Our brains have all kinds of different mechanisms that help us process information, and when they do, the decisions that result are biased in a lot of different ways. The growth of AI and big data has really led to a renaissance in studying bias, because it has made it so obvious how much bias is out there in the world. While it's unfortunate to see all that bias, it's also fortunate that we can now see it, because now we can do something about it. In the United States, we have a lot of legal guidelines that focus on protecting people of various classes: race, color, religion, sex, national origin, disability, and veteran status. Genetic information is protected, as is citizenship; we keep adding more things to the list. But bias is a much bigger problem than just those defined classes, and there are three areas here that I think are really interesting that I want to point out. There's been a large and growing conversation lately around neurodiversity. This is people who might be on the autism spectrum, Asperger's, that type of thing, or maybe have ADHD or various different types of learning disabilities. And those people are oftentimes underemployed.
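The "score the words" idea can be illustrated with a deliberately tiny sketch. Real interview-scoring systems are trained NLP models; the hand-picked keywords and weights below are invented purely to show the mechanism of turning a transcript into a number that can then be studied statistically.

```python
# Toy bag-of-words scorer for a transcribed interview answer.
# Keywords and weights are invented for illustration; production systems
# learn these relationships from data rather than hard-coding them.
WEIGHTS = {"team": 0.5, "resolved": 1.0, "customer": 0.5, "measured": 1.0, "result": 1.0}

def score_transcript(transcript: str) -> float:
    """Sum the weights of any scoring-relevant words found in the transcript."""
    tokens = (word.strip(".,!?").lower() for word in transcript.split())
    return sum(WEIGHTS.get(token, 0.0) for token in tokens)

score = score_transcript("I resolved the issue with my team and measured the result.")
```

Once every answer maps to a score like this, the scores themselves can be checked for validity and group differences, which is exactly the auditability Eric is pointing to.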
They don't have jobs at nearly the rate that people who are not neurodiverse do. And at the same time, they have really amazing strengths that have oftentimes gone unnoticed by employers. For example, problem-solving skills and focus can be very, very strong in people of neurodiverse backgrounds, depending on what type of neurodiversity they have. So those are people who can contribute a lot to various different roles out there, and that's a growing area of interest, especially when we talk about candidate scarcity. So many of our clients don't have enough candidates right now; it's very difficult to find people for jobs. So look at these other categories that maybe you haven't considered as much in the past. Another one is criminal backgrounds. This is interesting because a lot of times, when someone has a criminal background, they're automatically eliminated from consideration for a position. But that should not be the case; people should not be automatically eliminated just because of a criminal background they may have. A lot of research actually shows that people with criminal backgrounds can be amazing employees, very loyal, very hardworking, and a lot of times they may be more grateful for a job than people without that type of background. That doesn't mean you should hire everyone with a criminal background, but it does mean it should not be a blanket disqualification; you should look into it. And then the other one is disabilities. Disabilities are hard to study because there are so many of them; it's not one thing, there are all kinds of different disabilities, so you really have to look at them individually. But there are a lot of talented people with various disabilities who can contribute a lot to organizations as well.
So those are three categories that I like to point out, which I think can be very helpful, especially for companies right now that are looking to expand who they're able to consider for hiring purposes. Then, real quick.

Grant: [00:16:47] I might just jump in there. There's a question that's come through which relates a little bit to what you were speaking about, so apologies for the interruption. The question is around understanding a little bit more about current legislation versus potential future legislation, and how that tension will impact AI, automation, and the services that we offer. I know in the US you're probably a little bit ahead of us on the legislation front, so it would be good to hear your thoughts on that, please.

Eric: [00:17:16] Yes. Well, you're a day ahead of us on the calendar, but we're ahead of you in terms of legislation. So, yeah. Europe is very advanced on this as well, but essentially there are cities, states, and countries all over the world that are working on guidelines about how AI should be harnessed in decision making, especially in what we would call high-stakes decision making, which is who you're hiring. As a tech provider, as a developer of AI tools, you might think that I'm opposed to that, but actually I love it, and I think it's appropriate and exactly what should be happening. I think back to previous eras in history: when we started burning coal to make things, that ended up with a ton of pollution, and so in the United States the Environmental Protection Agency came in and said, whoa, hold on, you can't do that, we have to have standards around this. Right now we're seeing that with AI. It's a very, very powerful technology, so powerful, in fact, that it needs to be harnessed in the proper way so that it benefits humans, not just organizations but also individuals. So I think that's what's going on now. You will see a lot of legislation, some legislation that doesn't make that much sense and is not very clear and may be kind of ambiguous, and that legislation will hopefully get worked out; we'll iterate on it and it'll begin to make more sense. Also, hopefully, as we go along, different countries and different states and different cities won't each have their own unique legislation, but will come together to establish some basic principles. So there's a lot there, but suffice it to say that at Modern Hire we're proponents of legislation around AI, because it needs to be controlled.
And there are a lot of things that AI shouldn't be used to do. For example, we don't do facial recognition, which is a broad category of AI. It's very invasive, it's not even proven to predict anything, and it's got a lot of bias built into it. So we don't do that; we drew a hard line in the sand to say we won't do that. We're also conducting audits of our own systems right now, to prepare for future audits that might be conducted by legislative bodies. So we're approaching this from a very careful standpoint. I don't think anybody's going to legislate AI out of existence, but they do need to establish some sort of guardrails on what can be done with it. In my personal view, AI should only be used in a way that helps the human condition, not in a way that seeks to control people. You do see applications where it's used to control what people do: where they're looking, are they paying attention, what are they doing, what's their behavior. To me, those are invasive, and I would rather not see that sort of application. For us, certainly at Modern Hire, we're focused on applications of AI that can help people get a job, and that can help you as an employer figure out who to hire quicker, and so forth. Sorry, we could make that another webinar; there's a lot there. I hope that helps. All right, so back to the topic of bias. This is so fascinating. This is a graphic that shows a lot of the studied human biases that are out there. I know the words are so tiny that you can't really read them, and I can't either, but there are several main categories here, and this sort of describes how our brains process information. We have biases that help us decide what we should remember, because we're faced with so much data and information all the time. There's too much information out there.
So we have biases that help us zero in on particular aspects to pay attention to. Then there's a lot of information out there that is ambiguous, where we don't know what it means, so we have biases that help us figure out what these different pieces of information mean, but not always in a logical or rational way. And then we have the need to act quickly based on a limited set of information. We might not know all the facts, but we still have to make a fast decision. How do we do that? We don't do it like a computer; we don't study and weigh everything and make a super-rational decision. We make an emotional decision. So we have all these biases built into how our brains work. No kidding, then, that when we make decisions about other people, those decisions tend to have a component of bias to them, and no one is bias-free. Now, you could say that most people, hopefully, in the civilized modern world are not racist, but you can't say that people aren't biased. Everybody's biased; everybody's got a lot of bias just baked in fundamentally. And technology can help us with that; it can help us get beyond those biases. When you look at it briefly in a larger context, outside of hiring, I think this is why this topic of bias is so exciting: all throughout the world, not just hiring systems but financial systems, legal systems, prison recidivism, so many different areas are increasingly being influenced by algorithms. And those algorithms are oftentimes scaling up human bias in a way that impacts the larger population, the larger world.
And so that's why it's such an interesting time to be studying bias and trying to root it out, because it's a golden age of realizing how much bias there is and then, hopefully, being able to find it and get rid of it as much as possible. Now, from a corporate standpoint, why is it important to think about bias and to not be biased? Well, one of the reasons is that diversity is important for organizations to have: organizations that are more diverse perform better. This is McKinsey & Company research on screen here, but this research has been done over and over again, and found repeatedly at this point: organizations that are more diverse tend to perform better. One of the main obvious reasons why, I think, is that when you have more diversity, you have more diversity of thought, so you don't have as much groupthink. A leader doesn't just say, yeah, we're going to do this, and even though it's not really a great idea, everybody else just goes along with it. You have different opinions, different perspectives, different backgrounds, and all of those things can come into play and help organizational innovation and decision making be much, much stronger. So I want to take a look now at one of the research findings we've seen here at Modern Hire, which I think is pretty interesting. This is the result of years and years of deployments of our assessment systems and interview systems for various clients in five different industries: store manager in retail, health care, banking, call center operations, and operations management. What you can see here are the percentages of new hires in each of four categories of race, white, Hispanic, Black, and Asian, before they were using our system and after. So this is comparing before and after.
And what you can see is that the white category has gone down across the board, and all the other ones have gone up, except in one case there. What this indicates is that there's a large amount of additional balance and diversity in the organizations that are using these types of scientific hiring techniques. Now, one thing I want to point out is that at Modern Hire, we don't specifically hire people just because they're of a diverse category; that's not something our systems do. What our systems do is hire people who are likely to perform well in the job, based on years, decades actually, of scientific study. And when we're studying how to predict who's likely to be successful in the job, we also look at demographic data, so over time we're able to factor out questions that show group differences. So the reason you see these types of results is, again, not because we're specifically trying to hire people in various diverse categories; it's because we're factoring out the bias over time. All of the systems deployed here predict performance, and the organizations' performance goes up as a result of hiring better-quality people and doing so fairly. Finally, I'll talk a little bit about some survey research that we've done, where we found a few different things relating to the attitudes that candidates have about artificial intelligence. First of all, almost half, 44%, of candidates say they've experienced discrimination in the hiring process. Most of those who said they have experienced discrimination were not white. Then 56% of the candidates who have experienced discrimination believe AI might be less biased than human recruiters, and 49% believe AI might improve their chances of getting hired.
I wish those last two percentages were higher, but it actually kind of makes sense to me that they're not, because there's a ton of confusion about what AI even is. I think if you asked ten random people on the street what AI is, probably six of them would say it's a scary robot sent back from the future to kill me, and the others, I don't know. Most people think of AI as something omnipotent and omniscient, kind of scary and all-knowing and super smart, and that is really not what modern AI is at all. All AI is, is statistical analysis tools that allow us to process complex data like words and images and things like that. It's really a bad name; it shouldn't be called artificial intelligence, because it sounds very scary when it's not. So anyway, I think it's appropriate that not all candidates are comfortable with AI yet. I can't blame them; they don't study it all day and know intimately what it does and how it's used. I think it's our responsibility to help educate candidates so that they don't view it as something scary, but rather, as we've talked about, something that in our data is actually going to help them get a job in a fairer manner, and faster as well. So, let's see, I see there's a question here. I'm about to turn it over to Grant again, but I'll just take a look at this: how do you reduce or eliminate bias in the programmers of the AI technology? Yeah, when an AI technology is programmed, the programmer is making decisions about what data to include, how to score it, all these types of things. So you have to look at each individual technique that's being used and the data; each technique has a slightly different set of potential biases and ways that bias can creep into it.
The way we've found to do it is a process where we have teams of people doing the programming, balancing each other out, reviewing each other's decisions, things like that. At the end of the day, though, I think what really matters is the observed results from a system. You can calculate whether there is bias in the observed end result of the system, and you can monitor it over time as well. In fact, I think systems that make decisions about people should be monitored on a near-continuous basis, if that's possible to do. And when you do that, if there ever is any group difference that you spot, you can immediately address it. So there are a lot of different techniques there that we could go into more, but I'll again save that for another webinar. I'm going to stop sharing and turn it back to Grant here.
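The kind of ongoing outcome monitoring Eric describes is often operationalized with the four-fifths (80%) rule from the US Uniform Guidelines on Employee Selection Procedures: if any group's selection rate falls below 80% of the highest group's rate, the system is flagged for review. Here is a minimal sketch; the group names and counts are invented.

```python
def selection_rates(applicants: dict[str, int], hires: dict[str, int]) -> dict[str, float]:
    """Hire rate per demographic group."""
    return {group: hires.get(group, 0) / n for group, n in applicants.items() if n > 0}

def four_fifths_flags(applicants: dict[str, int], hires: dict[str, int]) -> list[str]:
    """Groups whose selection rate falls below 80% of the highest group's rate."""
    rates = selection_rates(applicants, hires)
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate < 0.8 * best]

# Hypothetical monthly monitoring snapshot (all numbers invented)
applicants = {"group_a": 200, "group_b": 150, "group_c": 100}
hires = {"group_a": 60, "group_b": 40, "group_c": 20}
flagged = four_fifths_flags(applicants, hires)
```

Run on every hiring cycle, a check like this is what makes "near-continuous monitoring" concrete: any flagged group triggers a review of the items and scores driving the difference.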

Grant: [00:30:23] Thanks very much, Eric. While we transition back to the slides, I thought I'd make a couple of additional comments regarding that question as it relates to legislation, and some of the things you were talking about there as well. Here in APAC, we're perhaps a little bit further behind some of the other geographies when it comes to actually legislating issues around AI; it probably falls into this category of what's good practice now and maybe legislated at some point in the future. But some of the things Eric was touching on there, being open about your AI in terms of how it works and where it's being used, are absolutely critical from a good-practice perspective, and so is disclosing to candidates; I think that's a key one. So in terms of the Modern Hire solutions, whenever AI is being deployed, we're letting candidates know, and in many cases we're giving them an opt-out option as well. We all know that AI is running behind the scenes in so many things, especially on social media, for example. You may just click on something, and the next thing you've seen advertisements come up relating to the very thing you just clicked on, and you say, how did that happen? What's going on here? So when it comes to hiring processes, it's absolutely critical to let candidates know when AI is being used and how it's being used in the process. But anyway, I digress a little bit. The next few topics we want to talk about are some of the Modern Hire solutions where AI is used, specifically as they relate to our interview-based solutions. Everything starts with our 11-factor competency model. This competency model really has two levels: a top level of 11 competencies, which can be applied to basically all the jobs around.
Then there is a second level of competencies, which adds more granular detail. Everything we do starts with a job analysis: we look at what is important for the job, which we then relate to this competency model as we design a solution. That might be our assessments, but the two we're going to talk about in a bit more detail now are our Automated Interview Creator and our automated interview scoring. As we start looking at these solutions, there are two sides to them: one is designing your interviews in line with best-practice guidelines, and the second part, which Eric will talk about again, is actually scoring those interviews, obviously using our AI technology as well. So what is the Automated Interview Creator, or AIC as we call it, really about? It's really nothing more than a robust interview question library that has been developed by our subject matter experts; psychologists have put it together. The number of questions available in this library runs into the tens of thousands. But where the AI becomes important is in being able to identify and select appropriate questions based on the role you may be recruiting for, the job family, the competencies, etc. Why is it important? Well, an interview is part and parcel of just about every hiring process that exists today, and it's about helping teams, maybe teams that are a bit short on resources, create the best interview questions for a particular role. Going back to those four E's and some of the problems we solve: all that hiring teams need to do is simply provide a job title, and we can create some questions based on that, so it's obviously efficient from that perspective, and it delivers not only the questions but the rating scales as well.
Rating scales are often what's missing from interviews: we've crafted appropriate questions, but how do we actually measure the candidate's response? From an ethics perspective, everything is grounded in good science by the people who put these questions together. Good questions tend to produce an engaging candidate experience, or perhaps the converse is more true: poorly crafted interviews and bad questions can have a negative impact on the candidate experience. In summary, AIC is an interview library with tens of thousands of questions plus an AI-driven search engine that enables you to deploy the right questions at different points in the hiring workflow. Looking at it from a technology perspective, there are some screenshots from the platform here. In the middle of the slide you can see recruiters using our platform: they can add one of their own questions, and they have their client-specific question library based on templates. But on the side, you can select a question from the AIC technology deployed on the platform, and it gives you a series of options to specify what you're looking for. It could be as simple as entering a job title; it could be adding a job family, the level of the role, competencies, or skills and experience. Perhaps more importantly, it can be a combination of those: you might enter a job title with specific competencies, and the AI gets to work generating appropriate questions for the categories you've set out. If you're focusing on competencies, you'll see each competency with a range of questions, and you don't need to use the questions exactly as presented.
You can adapt those questions to your specific circumstances, but at least they give you some ideas of well-crafted interview questions. Based on your search requirements, the tool will also suggest prescreen questions, the screening-out questions you may want to ask early in the process, and similarly some skills- and experience-based questions. All of those link back to your initial search, and then it's a case of picking and choosing from the options provided to start crafting scientifically grounded interview questions. Perhaps the most important part is the behaviorally anchored rating scales: every question in our library has these attached. They're there so that you as a recruiter, or the hiring managers you share the interview with, have some structure around responses. We know hiring managers often give a simple thumbs up or thumbs down because they want to process the candidate, but in the world of keeping good records and being able to justify decisions, having these rating scales is absolutely critical, and the technology enables hiring managers to provide their feedback against those actual rating scales. So that's a little bit about setting up interviews, but I think the fun stuff starts now, when we can actually use AI to score those interviews. Eric won the coin toss, so he gets to talk about the cool stuff: how we score those interviews using our AI technologies. I'll throw back to Eric.
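The AI-driven search Grant describes can be pictured, in a much-simplified form, as similarity matching between a recruiter's search terms and a tagged question library. The sketch below is purely illustrative: the library entries, tags, and bag-of-words cosine scoring are invented for the example, not Modern Hire's actual implementation.

```python
from collections import Counter
import math

# Toy question library; the real one holds tens of thousands of
# expert-written questions tagged with competencies (all entries
# here are invented for illustration).
LIBRARY = [
    {"q": "Tell me about a time you resolved a conflict within your team.",
     "tags": ["teamwork", "communication"]},
    {"q": "Describe a situation where you had to learn a new tool quickly.",
     "tags": ["adaptability", "learning"]},
    {"q": "How do you prioritize tasks when everything is urgent?",
     "tags": ["organization", "decision making"]},
]

def _vec(text):
    # Bag-of-words counts: lowercased, basic punctuation stripped.
    return Counter(text.lower().replace("?", "").replace(".", "").split())

def _cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def suggest_questions(search_terms, top_n=2):
    """Rank library questions by similarity to the recruiter's search
    terms (job title, competencies, skills), best match first."""
    query = _vec(" ".join(search_terms))
    scored = []
    for item in LIBRARY:
        doc = _vec(item["q"] + " " + " ".join(item["tags"]))
        scored.append((_cosine(query, doc), item["q"]))
    scored.sort(reverse=True)
    return [q for score, q in scored[:top_n] if score > 0]
```

With a production library, the ranking would come from a trained retrieval model rather than raw term overlap, but the interface idea is the same: search criteria in, candidate questions out.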

Eric: [00:37:31] All right, thanks, Grant. Let's see here, I think I've got the right one; you can see that. So we're going to talk a little bit now about what we call automated interview scoring, or AIS for short, because that's a lot of syllables. This is where we're actually using AI to score what a person says. It's a feature that is available in our on-demand interviews, not the live interviews, and candidates can respond via video, audio, or text. It doesn't matter which they choose, because we only use the text anyway. If they're doing a video interview, we're not using what they look like or what they sound like; we're transcribing the words into text. What AIS does is present recommendations and ratings to recruiters, and we'll talk a little bit about those. The important thing is that everything we do is grounded in those four E's, efficient, effective, ethical, and engaging, and we've tried to balance these solutions to make sure they hit all four. First, AIS can be used to give recommendations: it provides a question-level recommendation and allows clients to mix and match questions and different response types. It can also provide what we call ranked recommendations, an actual overall interview score and competency scores, which can be used to rank-order candidates, basically sorting your candidate pool top down from those who scored high to those who scored low.
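Since only the words are scored, every response modality has to be reduced to plain text first. A minimal sketch of that normalization step, where `transcribe` is a hypothetical stand-in for whatever speech-to-text service the platform actually calls:

```python
def transcribe(media):
    """Hypothetical stand-in for a speech-to-text service; a real
    deployment would call an external transcription API here."""
    raise NotImplementedError("plug in a transcription service")

def to_scorable_text(response):
    """Reduce any response modality to plain text so that downstream
    scoring never sees what a candidate looks or sounds like."""
    if response["modality"] == "text":
        return response["content"]
    # Audio and video responses: keep only the transcribed words.
    return transcribe(response["content"])
```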

Eric: [00:39:25] I'll talk just briefly about the science behind all of this and how we developed it. We started with large samples of interview data and had SMEs, subject matter experts, rate each interview response on different job-relevant competencies using behaviorally anchored rating scales; basically, we made the rating process as rigorous as we could. We then trained AI models to predict those same SME ratings, using the computers to replicate the ratings provided by the subject matter experts. And then we tried it all out on a separate set of data, which is called cross-validating your algorithms. It's a very large sample of interview responses that we used to do this, and since then we continually monitor it and work to improve it. That's essentially the process for how these things are built. The cool thing, when you step back and think about it, is that this is a computer beginning to understand what a person says, what they mean, in a very specific interview context. It can't generalize: if you ask it what kind of paper towels you should buy, it won't know what you're talking about, but it can evaluate your responses in a highly specific interview-question format. This is AI, and it's the beginning of the computer being able to understand what we say when we speak, which is very exciting. Now, when you go through the process as a candidate, you have to consent to your data being processed by AI, and you also have the ability to opt out of AI scoring if you'd like. The screen on the right there is the opt-out screen, and it's there because it's actually a legal requirement. Candidates have to be able to opt out.
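The build process Eric outlines, SME-rated responses, a model trained to reproduce those ratings, then a check on held-out data, can be sketched in miniature. Everything below is invented toy data and a deliberately simple linear model, not the production approach, which trains on very large samples.

```python
# Hypothetical labelled examples: (interview response, SME rating on a
# 1-5 behaviorally anchored scale). Invented for illustration only.
DATA = [
    ("i organised the team and we delivered on time", 5),
    ("i told my manager it was not my problem", 1),
    ("i listened to the customer and found a compromise", 4),
    ("i ignored the feedback", 2),
    ("i coordinated with two departments to fix the issue", 5),
    ("i waited for someone else to decide", 2),
]

VOCAB = sorted({word for text, _ in DATA for word in text.split()})

def featurize(text):
    # Bag-of-words counts over the training vocabulary.
    words = text.split()
    return [words.count(w) for w in VOCAB]

def train(rows, lr=0.01, epochs=500):
    """Fit a linear model that predicts the SME rating from word
    counts, via stochastic gradient descent on squared error."""
    w, b = [0.0] * len(VOCAB), 0.0
    for _ in range(epochs):
        for text, y in rows:
            x = featurize(text)
            err = b + sum(wi * xi for wi, xi in zip(w, x)) - y
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def predict(text, w, b):
    return b + sum(wi * xi for wi, xi in zip(w, featurize(text)))

# Cross-validation in miniature: fit on one slice, then measure error
# on a held-out slice the model never saw during training.
train_rows, holdout = DATA[:4], DATA[4:]
w, b = train(train_rows)
mae = sum(abs(predict(t, w, b) - y) for t, y in holdout) / len(holdout)
```

The held-out error (`mae` here) is the number that tells you whether the model generalizes beyond the responses it was trained on, which is the whole point of the cross-validation step Eric describes.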
That opt-out of AI scoring is required in the United States because of a law in one state, Illinois. As I mentioned earlier in response to the legal question, these are very interesting times, with different states and cities enacting their own laws. A big company that operates across the entire country, or the world, will often just make the whole company conform to one big city's rule, like New York City's, for simplicity's sake. So a single state or city can have an outsized influence on how AI is handled and governed, even globally. In any case, most candidates consent; very few opt out of AI scoring. And that's a good thing, because as I've mentioned, the AI is very fair, more fair than humans tend to be. Let me give you a few quick examples of what this can look like in different environments. This is a smartphone, obviously, so this is what a person would see going through the process: some informative text on the first screen, then brief instructions, and then they can type a response right on their smartphone if they'd like. The next set of screens shows what it looks like if they're speaking a response; these are the instructions they'd see when speaking the response into their smartphone. And this one is video, recording right into their smartphone. Each one is slightly more involved because it requires more of the phone's resources, but you can see it works regardless of the interview modality you choose. Again, we take out all information about what a person looks like or sounds like. And then this is the recruiter experience, what a recruiter would see when using AIS.
On the left are the questions that Lindsey, a candidate, answered. You can click on each question and see the video of her response, and below that the behaviorally anchored rating scales, along with a suggested score for her from the computer. On the right you see additional information, where you can type in your own comments and things of that nature. We've tried to combine the AI scoring with enough context that the recruiter can seamlessly integrate the AI's rating with their own review of the candidate. Now, as I mentioned, some people will choose to opt out of AI scoring; not a lot of them, but a few. When someone opts out, we actually put their profile right at the top of the sorted candidate list. The reason is that we don't want to penalize somebody who decided against being scored by the AI by burying them way down in the rankings where you'd never see them; they would be harmed simply for not wanting to use the AI, and we want to make sure nobody is harmed by this process. So we put them at the top, where the recruiter can see them, note that they don't have a score, and go take a look at them themselves. There are also lots of flexible ways to sort candidates and figure out which ones to review further; you can search for scores above certain thresholds and things of that nature. And this is the summary report you'd see for an individual candidate, presenting their overall fit for the job, their email address, and so forth.
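The opt-out handling Eric describes is easy to picture as a sorting rule: score descending, but unscored (opted-out) candidates pinned to the top. A minimal sketch, with a made-up candidate record shape:

```python
def rank_candidates(candidates):
    """Sort scored candidates high-to-low, but float anyone who opted
    out of AI scoring (score is None) to the top of the list, so they
    are never buried where a recruiter would miss them."""
    opted_out = [c for c in candidates if c["score"] is None]
    scored = sorted((c for c in candidates if c["score"] is not None),
                    key=lambda c: c["score"], reverse=True)
    return opted_out + scored
```

The design choice is deliberate: a plain descending sort would push unscored candidates to the bottom, which would effectively penalize them for exercising a legal right.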
You can also view their detailed responses, the actual text responses they gave during the interview, if you'd like to go further and review those as well. You can listen to the audio if they recorded their interview, or watch the video if they recorded one. There's further information here too: detailed competency results, where you can click on the response to each question and see how it relates to the job-relevant competencies, and a final evaluation form you can complete as a recruiter on whether or not you recommend them for hire. Those are the example screens. I want to show just a couple of other interesting things about AIS, and then we'll wrap up; I see there are a few questions we can hopefully get into as well. This big table of information indicates the fairness of AIS: it is very fair across different demographic groups, with very low differences between groups. Our goal is for those numbers to stay below an effect size of 0.2, which I believe all of them do; the negative ones mean the minority group actually scored slightly higher in that particular case. And then on the right side of the screen: when you have people rate other people on, say, a 1-to-5 scale, what always happens is that everybody gets a three or a four, and hardly anybody gets an extreme rating. That means the quality of the ratings is not very good, because there's not enough variance. AIS, by default, has almost perfect variance across all the different rating levels, which makes it a much more useful tool in a statistical sense than typical human ratings. So that basically takes us to the end of the content, and I want to point out one more thing, which is shameless self-promotion at this point.
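A standardized mean difference below 0.2 is the conventional reading of the fairness threshold Eric mentions (Cohen's d, where negative values mean the second group scored higher). As an illustrative sketch, assuming each group's scores are a simple list of numbers; this is not the actual AIS fairness analysis:

```python
import math

def cohens_d(group_a, group_b):
    """Standardized mean difference between two groups' scores
    (Cohen's d with a pooled standard deviation). Negative means
    group_b scored higher on average."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

def within_tolerance(d, threshold=0.2):
    """Adverse-impact screen: True when |d| is below the threshold."""
    return abs(d) < threshold
```

Running `cohens_d` over each demographic pairing and checking `within_tolerance` is the kind of outcome monitoring described later in the Q&A: you compute the group differences the system actually produces, then investigate any that exceed the threshold.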
We have a book that we've written, available in bookstores and on Amazon since March, called Decoding Talent: How AI and Big Data Can Solve Your Company's People Puzzle. It's not in any way a commercial for Modern Hire; it's really a vision we've had for a number of years on how AI and data can fundamentally improve our organizations, and in it we've tried to paint a picture of what the future HR organization looks like when it comes to data and analytics. So if you would, check that out. Now let's take a look at the questions and comments. Give me a second here. And Grant, if there are any you've noticed as well, feel free to shout them out.

Grant: [00:48:57] Yeah, Eric, there were a couple of questions around voice clarity, and I've tried to answer some in real time, in terms of converting video to text. I was wondering if there's anything to add about different countries and potentially accents, etc. Any comments you'd add to that?

Eric: [00:49:17] Yeah. We use a transcription service, and we are constantly evaluating these services and how well they work to make sure we're using the one that is the best and the least biased. Again, bias is everywhere; it's in everything. So I don't approach this from the standpoint that we're going to eradicate bias; that's not a realistic goal at this point, and anybody who says a tool is bias-free is not being honest. What we need to do is continually collect and study the data and root bias out. So yes, there are issues at times; the transcriptions are not always perfect. I don't know how many different accents and dialects there are around the world, but these systems tend to be trained on the more dominant accents and dialects. They're getting better over time, so it's something that's in progress. At the end of the day, though, the thing I want to point out as so important is that we monitor the outputs of these systems. Say you have a transcription or scoring system that has bias in it: if you're monitoring the outputs, you will see that some groups are scoring higher than others, and then you can go in, figure out why, and fix it. That sort of outcome monitoring is a very important part of the whole equation. And keep in mind that humans are extremely biased, so it's sort of a low bar to do better than that; a lot of this technology is very helpful in reducing bias, even though it's not perfect yet. Let's see what else. I see one question: can the scoring happen in a live interview done via Microsoft Teams or something like that?
It cannot be done that way yet, but hopefully that will become possible in the future; right now we only offer it for recorded interviews. Let's see what else. There's a question here: does the data vary a great deal between scoring a written response and scoring a transcribed verbal response? I kind of hit that a little already, but no, it doesn't vary much. There's a small amount of variation there, but we typically don't see very much.

Grant: [00:52:05] Eric, I think we've probably covered most of them, because we responded to a few other questions in writing while you were presenting. But if there are any others that catch your eye, feel free to respond to those. For those attending, we'll obviously respond to the remaining questions after the session, when the webinar recording is made available as well. I've probably bought you a minute there, Eric, if there are any others. Yeah.

Eric: [00:52:34] I think that's good; good stalling so I could read something. First of all, there's one about NLP. I saw NLP referenced a couple of times, and we mean natural language processing, not neuro-linguistic programming; sorry, we should be clearer about that. Natural language processing is a machine learning technique used in scoring the responses that candidates give us. And then there's a question: how do you suggest we prepare hiring managers for the AI way, or I guess the unbiased way, of hiring? As I mentioned earlier, hardly anyone knows what AI even is, so we have some education to do. I really hate the term AI, because it makes it sound like a scary robot, and it's not; it's just statistical analysis software that allows us to score things we couldn't before, like unstructured information, images, and sounds. That's why it's so revolutionary, but it's not because it's magic. So I really think we have to demystify what AI is in our training. We have to talk to people about what the capabilities of these tools are and what they aren't; they're not magic, even though they can seem very powerful at times. We have to train and educate people on how to combine the best of what a computer can do with the best of what a human can do, because we're certainly not at the point where computers should make all the decisions for us.

Grant: [00:54:19] Eric, there's a slightly off-topic question from Daryl that I'm happy to answer, about integrations with ATSs. The short answer is that we do have preconfigured integrations with ATSs, especially the large ones; we have about ten or so already built that can be configured. And through the use of open APIs, Modern Hire can integrate with pretty much any ATS. Then, when it comes to live environments: through the pandemic, lots of candidates got used to Zoom, Teams, and even WebEx, and we have integrations with those platforms as well.

Eric: [00:55:07] Great. Well, if anyone wants to further geek out over AI or any of this stuff, feel free to reach out, any time.

Grant: [00:55:18] Excellent. Thank you, everybody, for joining today. As I say, keep an eye out for our communications on the topic, and we really appreciate you joining to listen to what we had to say. Thanks, everybody.

Eric: [00:55:32] Thank you.