
Evaluating Regulations on Artificial Intelligence in the Workplace

Mar 5


Artificial Intelligence: West Lothian Digital Learning Team.

Generative artificial intelligence (AI) has taken the working world by storm in recent years. While high-skilled professionals enjoy increased efficiency and decreased burnout, lower-skilled workers fear layoffs and, with them, even fiercer competition in an already cut-throat job market. So far, discussions of AI's potential impacts have focused only on the extremes: either it boosts employee productivity and takes humanity to new heights, or it dehumanizes our workforce.


The thought of human jobs being replaced by computers seemed futuristic for many years, but in the past year it has become a reality Americans are forced to come to grips with. This past month, Workday outlined plans to lay off 1,750 employees (approximately 8.5% of its workforce) to invest more heavily in AI and accelerate international growth. The company intends to leverage AI to “alleviate talent gaps, streamline workflows, and automate their operational processes.” Furthermore, the AI revolution is expected to cause upwards of 200,000 layoffs on Wall Street, with AI projected to automate as much as 54% of banking positions. On the other hand, using generative AI to automate repetitive tasks can help workers focus on more meaningful projects, which research suggests boosts productivity and engagement. According to studies from MIT Sloan, AI can improve a skilled worker's performance by up to 40%, as long as workers apply their cognitive abilities to collaborate with AI rather than letting it replace human effort entirely. These conflicting implications leave the American workforce in a state of ambiguity, unsure whether to work with AI or against it.


As a student of labor relations, I recognize the pros and cons of generative AI, both on a personal level and at a wider scale. For example, I've personally used generative AI to write code in new languages, create practice tests during a busy exam season, and prepare for interviews. In balancing a packed schedule, the efficient and personalized experience that generative AI provides is priceless. However, like many of my peers, I also worry about how AI will affect healthcare, education, and, last but definitely not least, our future job prospects. According to an analysis by PwC, roles most exposed to AI face a 27% slower hiring growth rate. AI-exposed occupations also see a 25% higher net change in required skills, making the job market far more unpredictable, especially for young talent.


But the consequences aren't limited to automation: blindly trusting AI's sometimes unreliable output for internal operations can lead to grave issues. Large language models can produce biased outcomes, given the ‘regurgitating’ nature of these platforms. For example, a random-sampling analysis of ChatGPT simulations of physicians handling life-or-death situations under hypothetical resource constraints found that ChatGPT's recommended solutions exhibited multiple gender, age, and race biases. Biased AI output can result in discriminatory hiring outcomes as well, as companies have begun to incorporate AI into resume reviews, first-round interview screenings, and cover letter reviews. Amazon's AI-based hiring model had “a propensity to favor male candidates for technical roles,” having learned from a human-generated dataset that reflected the long-standing gender gap within Amazon's technical departments and the broader technology field. Using AI to streamline decision-making calls for serious consideration of its ethical implications, which are unfortunately often overlooked.


While it may seem that we cannot prevent the “AI revolution,” we can take steps to mitigate its impacts on the workforce. States like New York and Massachusetts have recently passed laws requiring companies to disclose their use of AI in hiring. Similarly, California, Illinois, and Oklahoma have passed laws requiring companies to assess and report the impacts of their AI-powered decision-making tools. In the wake of these states' new policies, workers in other states who are concerned about AI affecting their job prospects should encourage their state senators, assembly members, and federal Congressional representatives to implement similar regulations. As Cornell students, we can push our representatives to strengthen AI regulations in New York State. Potential policy improvements include requiring impact reports like those in California and Illinois, or amending current regulations to require disclosure when AI is used for internal human resources operations like scheduling or compensation. Workers who are informed about the systems that govern their employment are better empowered to vet and question them. Aside from pushing for formal regulation, those concerned about the negative effects of AI should educate themselves about these systems and encourage their use for good. With a healthy combination of public policy regulation and controlled, ethical, and informed use of AI, AI can empower us rather than replace us.
