In August 2023, the Equal Employment Opportunity Commission (EEOC) announced a settlement with iTutorGroup, Inc., a China-based provider of online English-language tutoring. Under the settlement, iTutorGroup agreed to pay $365,000 to resolve an EEOC lawsuit alleging that its hiring software discriminated against older job applicants in violation of the Age Discrimination in Employment Act (ADEA).
According to the EEOC's complaint, iTutorGroup's application software was programmed to automatically reject female applicants aged 55 or older and male applicants aged 60 or older, screening out more than 200 qualified applicants on the basis of age alone. The charge arose after an applicant who was rejected when she submitted her real date of birth was offered an interview when she resubmitted the identical application with a more recent birthdate.
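For illustration, the kind of hard-coded screening rule at issue can be sketched in a few lines. The cutoff, function name, and labels below are assumptions for demonstration, not details from the actual iTutorGroup system:

```python
# Illustrative sketch only: a hard-coded age screen of the kind the EEOC
# alleged, applied before any human ever reviews the application.
AGE_CUTOFF = 55  # hypothetical cutoff for demonstration

def screen_applicant(age: int) -> str:
    """Auto-reject any applicant at or above the cutoff, with no human review."""
    return "reject" if age >= AGE_CUTOFF else "advance"

print(screen_applicant(58))  # reject
print(screen_applicant(33))  # advance
```

The point of the sketch is how little "intelligence" is required: a single comparison, buried in an application pipeline, is enough to produce systematic age discrimination at scale.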
The lawsuit was the first the EEOC had brought alleging discrimination by automated hiring software, and the settlement is a cautionary tale for employers considering AI in hiring: the technology can discriminate, and employers need to take deliberate steps to mitigate that risk.
Here are some ways that AI can cause discrimination in the hiring process:
- Bias in the training data. A model learns whatever patterns its training data contains. If historical hiring data reflects past preferences, for example a workforce hired disproportionately from among younger applicants, a model trained on that data will learn to replicate those preferences against older or minority applicants.
- Lack of transparency. Many models are effectively black boxes: it can be hard to explain why a given applicant was rejected, which makes potential biases difficult to detect, let alone correct.
- Inflexibility. A model's decision rules are fixed at training time. If the workforce or the job's requirements change, the model keeps applying outdated correlations, which can disadvantage certain groups of applicants.
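The first point, bias in the training data, can be made concrete with a toy example. The dataset and the deliberately naive "model" below are invented for illustration; they show how a model that simply learns historical outcomes ends up reproducing historical bias:

```python
from collections import defaultdict

# Toy, invented hiring history in which older applicants were rarely hired.
history = [
    # (age_group, hired)
    ("under_40", True), ("under_40", True), ("under_40", False),
    ("under_40", True), ("40_plus", False), ("40_plus", False),
    ("40_plus", True), ("40_plus", False),
]

def fit_majority_model(rows):
    """'Train' by recording each group's majority historical outcome."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, not_hired]
    for group, hired in rows:
        counts[group][0 if hired else 1] += 1
    return {group: c[0] > c[1] for group, c in counts.items()}

model = fit_majority_model(history)
print(model)  # {'under_40': True, '40_plus': False}
```

A real hiring model is far more sophisticated than a majority vote, but the failure mode is the same: if the past was biased, a model optimized to predict the past will be biased too.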
There are a number of things that employers can do to mitigate the risk of discrimination in the use of AI in the hiring process:
- Use representative training data. The data used to train a hiring model should reflect the full range of applicants the employer expects to evaluate, which helps keep the model from being skewed against any particular group.
- Be transparent about how the model works. Employers should be able to explain what signals the model uses and how it reaches its decisions. Transparency builds trust with applicants and makes potential biases easier to identify and address.
- Use AI in conjunction with human judgment. AI should not replace human judgment in the hiring process. Model output should be one input among several, and software should never reject applicants automatically without human review.
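One concrete audit that supports these practices is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: if any group's selection rate falls below 80% of the highest group's rate, the selection procedure may have adverse impact and deserves closer review. A minimal sketch, with illustrative group names and counts:

```python
# Four-fifths (80%) rule check on a hiring tool's outcomes.
# The group labels and counts below are illustrative assumptions.
def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes):
    """True for groups at or above 80% of the best group's selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best >= 0.8 for group, rate in rates.items()}

outcomes = {"under_40": (60, 100), "40_plus": (30, 100)}
print(four_fifths_check(outcomes))  # {'under_40': True, '40_plus': False}
```

Here the 40-plus group's selection rate (30%) is only half the under-40 rate (60%), well below the 80% threshold, which would flag the tool for further investigation. Passing the check does not prove a tool is lawful, but failing it is a clear signal to dig deeper.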
The iTutorGroup AI discrimination lawsuit settlement is a reminder of the importance of using AI in a responsible way. Employers who are considering using AI in their hiring process should take steps to mitigate the risk of discrimination. By following these best practices, employers can use AI to make better hiring decisions while still ensuring that everyone has a fair shot at getting a job.
Here are some additional thoughts on the role of AI in HR best practices:
- AI can be a valuable tool for HR professionals, but it is important to use it wisely. AI should be used to augment human judgment, not to replace it.
- When using AI in the hiring process, it is important to be transparent about how the AI works and to be aware of the potential for bias.
- Employers should also have clear policies and procedures in place to address any potential discrimination issues.
By building these safeguards into their processes, employers can use AI to improve their hiring while treating every applicant fairly.