Artificial intelligence applied to personnel selection

The job offer that never made it onto a job board

Experts say that claims that artificial intelligence can sidestep human prejudice around ethnicity or gender and thereby boost diversity in the workplace are "false and dangerous". According to researchers at the University of Cambridge (United Kingdom), these tools reduce race and gender to trivial data points, and they often rely on personality analysis that amounts to "automated pseudoscience".

In recent years, AI tools have come to market as an answer to the lack of diversity in the workforce, from chatbots used to screen potential candidates to analysis software for video interviews.

Those behind this technology claim that it cancels out human prejudice against gender and ethnic origin during hiring, relying instead on algorithms that read vocabulary, speech patterns and even facial micro-expressions to assess huge pools of job applicants for the right personality type and "cultural fit".

However, in a new paper published in the journal 'Philosophy and Technology', researchers from Cambridge's Centre for Gender Studies argue that these claims make some uses of AI in hiring little more than "automated pseudoscience" reminiscent of physiognomy or phrenology: the discredited belief that personality can be deduced from facial features and the shape of the skull.

They say it is a dangerous example of "technosolutionism": turning to technology for quick fixes to deep-rooted discrimination problems that require investment and changes in company culture.

In fact, the researchers have worked with a team of Cambridge computer science students to debunk these new recruitment techniques by building an AI tool modeled on the technology, available at https://personal-ambiguator-frontend.vercel.app/


The "Personality Machine" demonstrates how arbitrary changes in facial expression, clothing, lighting and background can produce radically different personality readings, which could mean the difference between rejection and progression for a generation of job applicants competing for graduate positions.

The Cambridge team says that using AI to narrow down candidate pools may ultimately increase uniformity rather than diversity in the workforce, because the technology is calibrated to search for the employer's fantasy "ideal candidate".

According to the researchers, those with the right training and background could "beat the algorithms" by replicating the behaviors the AI is programmed to identify, and then carrying those attitudes into the workplace.

Moreover, because the algorithms are refined using past data, they argue that the candidates deemed most suitable are likely to end up being those who most resemble the current workforce.

"We are concerned that some vendors are wrapping 'snake oil' products in a shiny package and selling them to unsuspecting customers," says co-author Dr Eleanor Drage. "By claiming that racism, sexism and other forms of discrimination can be stripped out of the hiring process using artificial intelligence, these companies reduce race and gender to insignificant data points, rather than systems of power that shape how we move through the world."

The researchers point out that these AI hiring tools are usually proprietary, or a "black box", so how they actually work is a mystery.

"While companies may not be acting in bad faith, there is little accountability for how these products are built or tested," admits Drage. "As such, this technology, and the way it is marketed, could end up being a dangerous source of misinformation about how hiring can be 'de-biased' and made fairer."

Despite some pushback (the EU's proposed AI Act classifies AI-driven hiring as "high risk"), the researchers say that tools made by companies such as Retorio and HireVue are deployed with little regulation, and they point to surveys suggesting that the use of AI in hiring is growing.

A 2020 study of 500 organizations across various sectors in five countries found that 24% of companies had implemented AI in their hiring and that 56% of hiring managers planned to adopt it within the next year.

Another survey of 334 human resources leaders, carried out in April 2020 as the pandemic took hold, found that 86% of organizations were incorporating new virtual technology into their hiring practices.

"This trend was already underway when the pandemic began, and the accelerated shift to online work caused by COVID-19 is likely to see human resources departments deploy more AI tools in the future," says co-author Dr Kerry Mackereth, who presents the Good Robot podcast with Drage, in which the pair explore the ethics of technology.

COVID-19 is not the only factor, according to the human resources professionals the researchers have interviewed. "Volume hiring is increasingly unsustainable for human resources teams who are desperate for software to cut costs, as well as the number of applicants needing personal attention," says Mackereth.

Drage and Mackereth say that many companies now use AI to analyze candidates' videos, interpreting personality by evaluating regions of the face (much like lie-detection AI) and scoring the "big five" personality traits: extroversion, agreeableness, openness, conscientiousness and neuroticism.
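To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of pipeline such a tool might use: detect a face in each video frame, extract crude image features, and map them to five trait scores. The feature choices, weights and file name are invented purely for illustration; real products are proprietary black boxes, as the article notes.

```python
"""Toy sketch of a video "personality scoring" pipeline.

Not any vendor's real system: the features (brightness, contrast,
face size) and the weight matrix are invented for illustration.
Requires opencv-python and numpy.
"""
import cv2
import numpy as np

# The Big Five trait names reported by tools of this kind.
TRAITS = ["extroversion", "agreeableness", "openness",
          "conscientiousness", "neuroticism"]

# Hypothetical weights mapping 3 crude image features to 5 trait scores.
WEIGHTS = np.random.default_rng(0).normal(size=(5, 3))

FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def frame_features(frame):
    """Extract crude, illustrative features from one video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_DETECTOR.detectMultiScale(gray, scaleFactor=1.1,
                                           minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]
    # Brightness, contrast and relative face size: none of these are
    # plausibly linked to personality, which is exactly the point.
    return np.array([face.mean() / 255.0,
                     face.std() / 255.0,
                     (w * h) / gray.size])


def score_video(path):
    """Average per-frame features over a video, then map to 'traits'."""
    cap = cv2.VideoCapture(path)
    feats = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        f = frame_features(frame)
        if f is not None:
            feats.append(f)
    cap.release()
    if not feats:
        return None
    scores = WEIGHTS @ np.mean(feats, axis=0)
    return dict(zip(TRAITS, scores.round(3)))


if __name__ == "__main__":
    print(score_video("candidate_interview.mp4"))  # hypothetical file
```

Because the output here is a fixed function of superficial image statistics, changes in lighting or camera distance shift the scores even though the candidate is the same person, which is precisely the failure mode the "Personality Machine" is built to expose.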

The students behind the "Personality Machine", which uses a similar technique to expose its flaws, say that although their tool will not help users beat the algorithm, it will give job applicants a sense of the kinds of AI scrutiny they might be subjected to, perhaps without even knowing it.

"All too often, the hiring process is oblique and confusing," says Euan Ong, one of the student developers.

"These tools are trained to predict personality from common patterns in images of people they have seen before, and often end up finding spurious correlations between personality and properties of the image that seem unrelated to the person, such as the lighting," adds Ong. "This is a toy version of the kind of models we believe are used in practice, so that we can experiment with them ourselves."
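Ong's point about spurious correlations is easy to reproduce. The following small experiment is my own illustration under stated assumptions, not the students' actual code: a classifier is trained on synthetic "images" in which the label happens to correlate with overall brightness, and simply brightening the same image tends to change its predicted label.

```python
"""Toy demonstration of a spurious correlation between lighting and label.

Illustrative only: synthetic data, not the Cambridge students' model.
Requires numpy and scikit-learn.
"""
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic training set: 8x8 "face" images flattened to 64 pixels.
# By construction, class 1 ("extroverted") images were photographed in
# brighter conditions on average, a property of the lighting, not the person.
n = 1000
labels = rng.integers(0, 2, size=n)
base = rng.normal(0.5, 0.1, size=(n, 64))
images = np.clip(base + 0.15 * labels[:, None], 0.0, 1.0)

model = LogisticRegression(max_iter=1000).fit(images, labels)

# The same "candidate" under two lighting conditions.
candidate = rng.normal(0.5, 0.1, size=(1, 64))
dim = np.clip(candidate, 0.0, 1.0)
bright = np.clip(candidate + 0.15, 0.0, 1.0)

print("dim lighting    ->", model.predict(dim)[0],
      model.predict_proba(dim)[0].round(2))
print("bright lighting ->", model.predict(bright)[0],
      model.predict_proba(bright)[0].round(2))
# With this setup the prediction typically flips, even though nothing
# about the person has changed: the model has learned the lighting.
```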
