
Navigating the Intersection of AI and DEI (Ms. Salehpour's Recently Published Article from NALA's Quarterly Magazine)


ChatGPT was released almost two years ago, capturing the public's attention and making artificial intelligence (AI) an overnight buzzword. Yet many of those newly intrigued by AI do not realize that AI technology in the form of predictive analytics was in broad use at the enterprise level for many years before ChatGPT's release, including in sensitive areas such as employment and health care. Business and government use of predictive analytics will only continue to grow. These technologies are arguably more interesting, and have more potential, than generative AI offerings such as ChatGPT, but they are also more concerning because of the direct impact they can have on access to education, employment, housing, insurance, financial services, health care, and government services.


Predictive analytics uses artificial intelligence and machine learning to analyze historical data and predict future outcomes, offering businesses and governments powerful tools for making more informed decisions. However, growing reliance on predictive analytics raises critical legal and ethical concerns, particularly as these tools increasingly drive automated decision-making that affects individuals. One such concern is bias, which enters AI algorithms in a variety of ways. Beyond the inherent biases of the technology's developers and users, the training data itself may be embedded with historical inequalities and systemic biases that can lead to discriminatory outcomes. For example, predictive analytics tools are increasingly used to screen resumes, assess candidates' suitability, and even predict future job performance. If such tools are trained on biased data, they may unfairly favor certain demographic groups, resulting in discrimination and violations of employment laws.


Indeed, in light of growing concerns about AI bias, several states are passing laws aimed at regulating the use of AI profiling. One of the most notable examples is the Colorado Artificial Intelligence Act, the first comprehensive AI legislation in the US, which takes effect in February 2026. It places significant responsibilities on developers and deployers of high-risk AI systems (such as those that influence critical decisions in areas like education, employment, finance, and health care) to protect consumers from algorithmic discrimination. The responsibilities imposed on businesses creating or using AI include impact assessments, consumer transparency requirements, and mechanisms for consumers to contest adverse decisions influenced by AI. Businesses should expect to see more state-level laws and regulations in this space. Additionally, federal regulators are paying closer attention to bias issues in sensitive, high-risk areas. For example, the Equal Employment Opportunity Commission (EEOC) has issued guidance emphasizing that employers must ensure that AI tools do not result in discriminatory practices.


For businesses interested in using AI tools and promoting diversity, equity, and inclusion (DEI), what can be done to combat these risks? Such businesses can take several proactive steps to mitigate risks associated with bias and help align their AI initiatives with their DEI goals:


1. Prioritize vendors that emphasize ethical AI practices, such as conducting regular bias audits and providing transparency about how their algorithms work.


2. Continuously track the outcomes generated by their AI tools and review the results regularly to ensure that the AI is not disproportionately affecting any group. If issues are found, take corrective action.


3. Train personnel on how AI and DEI intersect so they can recognize the potential for bias in the AI outputs and make human oversight a core part of the decision-making process.


4. Be transparent with personnel and customers on how AI tools are used in decision-making.


5. Work closely with HR, DEI personnel, and legal teams to set clear company objectives for using AI responsibly.


6. Stay informed of developing laws and regulations and be proactive in keeping compliance up to date.
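To make the monitoring described in step 2 concrete, one common heuristic is the "four-fifths rule" that the EEOC has historically applied to selection procedures: if one group's selection rate falls below 80% of the most-favored group's rate, the tool warrants closer review. The sketch below is illustrative only; the function names and applicant counts are invented, and a ratio below 0.80 is a signal for human review, not a legal conclusion.

```python
# Illustrative disparate-impact check using the four-fifths heuristic.
# All counts here are invented example data, not real screening logs.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / total

def impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the reference group's rate."""
    return group_rate / reference_rate

# Hypothetical counts from an AI resume-screening tool's outcomes.
rate_reference = selection_rate(selected=50, total=100)  # 0.50
rate_comparison = selection_rate(selected=30, total=100)  # 0.30

ratio = impact_ratio(rate_comparison, rate_reference)  # 0.30 / 0.50 = 0.60

# A ratio below 0.80 may indicate disproportionate impact and should
# trigger human review and corrective action (step 2 above).
flagged_for_review = ratio < 0.80
print(f"impact ratio = {ratio:.2f}; flagged for review: {flagged_for_review}")
```

A check like this is a starting point, not a substitute for a full bias audit; counsel and HR teams would still need to review the flagged results in context.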


Implementing these practices can allow businesses to use AI tools effectively while ensuring fair and equitable outcomes.


Morvareed Z. Salehpour is the Managing Attorney of Salehpour Legal, a law firm specializing in business and technology law and recognized as a 2024 Top 100 Bruin Business. Her firm focuses on negotiating and structuring contracts and tech transactions and providing counsel on product launches, intellectual property, data privacy, open-source issues, and emerging technologies. She has represented clients of all sizes nationwide in a variety of industries. She is a graduate of UCLA and UCLA School of Law. She is also Past-President of the Santa Monica Bar Association and Career Chair for the UCLA Alumni Westside Bruins Network. Email: msalehpour@salehpourlaw.com


Videos and content are for educational purposes only, not to provide specific legal advice.


