As a friend, AI holds promise for promoting diversity, equity and inclusion. Yet, as a foe, AI deserves a cautious pause given limited data availability, algorithmic biases and lack of AI literacy. This technology has so much potential to both benefit and harm society that the Biden administration announced new actions to promote responsible AI innovation that protects Americans’ rights and safety.
Given these dual paths, I see four use cases for this technology in driving DEI: descriptive, diagnostic, predictive and prescriptive. Descriptive analytics explain what has happened: for example, pulling a report describing preferential hiring or promotion practices based on factors irrelevant to predicting future performance. Diagnostic analytics help explain why something happened: identifying “pockets” in the organization with higher complaints or claims of discrimination. Predictive analytics show what is likely to happen, often based on probabilities: informing a decision to fund one type of DEI training versus another. Finally, prescriptive analytics recommend decisions and actions: uploading a job announcement to detect gendered and/or racialized language and then automatically correcting it.
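To make the prescriptive case concrete, here is a minimal Python sketch that flags gender-coded terms in a job announcement and substitutes neutral alternatives. The word list and replacements are illustrative assumptions, not a validated lexicon; a production tool would use a researched vocabulary and preserve capitalization.

```python
import re

# Illustrative (not validated) map of gender-coded terms to neutral alternatives.
GENDERED_TERMS = {
    "ninja": "expert",
    "rockstar": "high performer",
    "dominant": "leading",
    "chairman": "chairperson",
    "manpower": "workforce",
}

def flag_and_correct(posting: str) -> tuple[list[str], str]:
    """Return the gender-coded terms found and a corrected posting."""
    found = []
    corrected = posting
    for term, neutral in GENDERED_TERMS.items():
        pattern = re.compile(rf"\b{re.escape(term)}\b", re.IGNORECASE)
        if pattern.search(corrected):
            found.append(term)
            corrected = pattern.sub(neutral, corrected)
    return found, corrected

found, fixed = flag_and_correct(
    "Seeking a rockstar chairman with dominant sales instincts."
)
print(found)  # ['rockstar', 'dominant', 'chairman']
print(fixed)  # "Seeking a high performer chairperson with leading sales instincts."
```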
AI as a Friend of DEI
AI can be used to inform more objective decision-making based on past data rather than relying solely on surveys, focus group insights, or the expertise of the C-suite and board. For instance, AI can be leveraged to identify the optimal combination of skills for a specific role rather than depending on the hiring manager’s judgment alone.
Similar to how orchestras rely on blind auditions, AI algorithms can make decisions based on the competencies and skill sets required for a job without access to applicant information that is not job related, such as hairstyle. These tools can also support more objective decisions regarding promotions and compensation. Given the labor shortages and high voluntary turnover confronting healthcare today, AI can also provide predictive power regarding which employees are more likely to stay and which factors are driving attrition among high performers.
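As a sketch of how such blinding might work in practice, the snippet below strips fields that are not job related before a screening model ever sees an application. The field names and scoring function are hypothetical placeholders, not a real vendor’s schema or model.

```python
# Fields assumed (hypothetically) to be job related; everything else is redacted
# before scoring so the model cannot condition on demographic signals.
JOB_RELATED_FIELDS = {"skills", "certifications", "years_experience", "work_samples"}

def blind(application: dict) -> dict:
    """Keep only job-related fields, discarding name, photo and similar data."""
    return {k: v for k, v in application.items() if k in JOB_RELATED_FIELDS}

def score(application: dict) -> float:
    """Placeholder competency score; a real system would use a validated model."""
    return 2.0 * application.get("years_experience", 0) + len(application.get("skills", []))

applicant = {
    "name": "A. Candidate",   # redacted before scoring
    "photo_url": "redacted",  # redacted before scoring
    "skills": ["EHR systems", "scheduling", "billing"],
    "years_experience": 6,
}
print(score(blind(applicant)))  # 15.0
```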
AI as a Foe of DEI
Algorithmic bias is the most well-known risk of using AI because it can perpetuate existing biases. Remember, AI is only as good as the data it is trained on; this caution embodies the adage “garbage in, garbage out.” Another risk is a mismatch between the AI literacy of the team selecting vendors and the literacy required to evaluate the impact that AI technology has on DEI. A related risk, especially among DEI and HR practitioners, is that AI will be used primarily as an efficiency tool to address high workloads and administrative burden rather than as a tool to drive the strategic outcomes of a DEI initiative.
With that, here are three suggestions for leaders considering the use of AI to promote DEI in their organizations.
Proceed with caution. Any AI effort should be grounded in ethics, focused on strategy and results driven. It is imperative to align AI with each stage of the DEI Maturity Model, developed by Ella F. Washington, PhD, professor of practice at Georgetown University’s McDonough School of Business, in an article in the November/December 2022 issue of Harvard Business Review. Washington identifies five stages, from least to most mature: aware, compliant, tactical, integrated and sustainable. Leaders within and outside of DEI ought to consider introducing AI into DEI work at the tactical stage and robustly infusing it at the integrated stage.
Beginning with the compliant stage, AI would be deployed in ways that minimize legal and regulatory risks. At the tactical stage, AI tools would be leveraged to make meaning of existing patterns (descriptive). At the integrated stage, assuming a robust data infrastructure and an AI-skilled workforce, AI tools would be leveraged to attribute causality or gain insights (diagnostic); to inform decisions about costs and resource allocation (predictive); and to make tailored recommendations for patients, employees and the community specific to the organization (prescriptive). Finally, at the sustainable stage, AI would be “hardwired” into the way the organization operates and used to augment and enhance human decision-making as well as to drive efficiencies.
Mitigate risks by data mine sweeping. To mitigate risks, organizations must first know what possible risks exist across the landscape: legal, regulatory and reputational. Legal risks could range from malpractice suits to discrimination suits. Regulatory risks could include failing to comply with emerging guidance on AI from the Equal Employment Opportunity Commission. And reputational risks could arise from postings on company ratings websites about how the organization uses AI in its processes and decisions, and about perceptions of how diverse workers are treated. Remember, the status quo is not risk free. As such, move forward, but be equipped with knowledge about incorporating AI into the organization’s enterprise risk management framework.
AI can also be used as a DEI enterprise risk management tool: create a centralized repository of any data that could even remotely have an impact on DEI, and then mine it for patterns without first forming a hypothesis. This is particularly valuable with unstructured data, such as incident reports, hotline complaints and exit interviews, in contrast to structured data such as surveys and turnover reports.
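One way to operationalize this kind of hypothesis-free mining is to cluster unstructured text and let a reviewer inspect what surfaces. The sketch below uses scikit-learn’s TF-IDF vectorizer and k-means; the complaint snippets are toy stand-ins, and the number of clusters is an assumption a real analysis would tune.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy stand-ins for unstructured records (incident reports, hotline complaints,
# exit interviews); a real repository would hold thousands of documents.
documents = [
    "passed over for promotion despite strong reviews",
    "manager made comments about my accent in meetings",
    "promotion criteria were never explained to my team",
    "comments about accents continued after I complained",
    "no feedback given before promotion decisions",
    "jokes about where I am from were dismissed by HR",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Group documents by cluster so a reviewer can look for emergent themes
# without having formed a hypothesis in advance.
for cluster in sorted(set(labels)):
    print(f"Cluster {cluster}:")
    for doc, label in zip(documents, labels):
        if label == cluster:
            print("  -", doc)
```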
Build a robust network of internal and external AI experts. Healthcare leaders do not need to be experts in AI and DEI to effectively, efficiently and ethically deploy this increasingly everyday technology, and its soon-to-come enterprise solutions, to advance the DEI agenda at their workplace. Internally, DEI can be further resourced, supported and scaled by automating tasks such as audits, analysis and even reporting using generative AI tools such as ChatGPT. Externally, AI tools from vendors can also advance the organization’s DEI strategy and do some of the heavy lifting for teams that are under-resourced or facing competing priorities.
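As one illustration of the internal automation point, the sketch below drafts a narrative summary of DEI metrics using OpenAI’s Python SDK. The model name, prompt and metrics are all assumptions, and any generated text would still need human review before it goes anywhere near a board.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical aggregate metrics pulled from an HR dashboard.
metrics = {
    "voluntary_turnover_pct": {"overall": 14.2, "underrepresented_groups": 19.8},
    "promotion_rate_pct": {"overall": 8.1, "underrepresented_groups": 5.3},
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever your organization has approved
    messages=[{
        "role": "user",
        "content": f"Draft a two-paragraph DEI board update summarizing: {metrics}. "
                   "Flag gaps neutrally and suggest one follow-up analysis.",
    }],
)
print(response.choices[0].message.content)  # review before sharing; never treat as final
```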
Before signing a contract with an AI vendor, carry out due diligence and even conduct a premortem, asking questions such as the following:
- What may go wrong and how can I minimize or prevent that from occurring?
- How did you train the AI model? What data did you use? How do you know the training data is free of existing biases?
- What is the accuracy rate of your predictive models? How did you measure it?
These questions, although not exhaustive, offer a smart way to engage with AI vendors and to ensure that the organization is using the right tool for the right reason at the right time for the right investment.
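To pressure-test a vendor’s answer to the accuracy question, a leader’s analytics team could hold out labeled data and check both overall accuracy and group-level selection rates. This sketch uses scikit-learn and synthetic data, so every number and variable here is an assumption rather than a benchmark.

```python
import numpy as np
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic holdout set: vendor model predictions vs. true outcomes, with a
# protected-group flag used only for the audit, never for scoring.
y_true = rng.integers(0, 2, size=1000)
y_pred = np.where(rng.random(1000) < 0.85, y_true, 1 - y_true)  # ~85% accurate
group = rng.integers(0, 2, size=1000)  # 0/1 protected-group indicator

print("Overall accuracy:", accuracy_score(y_true, y_pred))

# Selection-rate parity: large gaps between groups warrant vendor follow-up.
for g in (0, 1):
    rate = y_pred[group == g].mean()
    print(f"Group {g} selection rate: {rate:.2f}")
```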
In essence, the integration of AI into DEI initiatives demands a cautious yet proactive stance. As organizations move forward into an era of AI-driven decision-making, being grounded in ethics, focused on strategy and committed to driving positive results will be pivotal for realizing the full potential of AI as a force for diversity, equity, inclusion and health equity.
William F. “Marty” Martin, PsyD, is professor of Management & Entrepreneurship, faculty director, and Research & Innovation Leadership Fellow at DePaul University, Chicago (martym@depaul.edu).
Blind Interviewing: Color Blindness vs. Multiculturalism
Blind interviewing is not without controversy and requires deliberation among those making these hiring and promotion decisions. Past research reveals the tension between hiring based on knowing, valuing, appreciating and factoring into the selection process the identity of the candidate (i.e., multiculturalism) versus not having access to this demographic, identity and social categorization data (i.e., color blindness).
Blind interviewing appeals more to some groups than others, may decrease sensitivity to preferential hiring based on identity and social categorization, and can implicitly treat systemic and institutional racism as nonexistent. On the other hand, a multicultural approach to interviewing may result in pigeonholing candidates into certain industries, companies, departments and roles, and may heighten a perceived threat among those who fear they will not be selected on merit. Like other complex realities, there is no simple answer, which is why an intentional series of deliberate dialogues must take place among the board and C-suite.