For a few months whenever someone mentioned AI I couldn’t help but make a Skynet joke.
I apologize to anyone who had to endure those months, because I was relentless.
At some point I had to stop, because I couldn't go a few hours without hearing about AI. That's when I realized I wasn't taking AI seriously.
Since then I’ve written a few newsletters, hosted an HR Therapy about AI and its impact on work, polled my network on tools and read research about what we need to consider as HR leaders.
I sought out two experts to contribute to the newsletter this week to tell us more about AI and what we need to be thinking about!
⭐ B. McKensie Mack and Ellen Pao co-authored AI at Work: What Executive Leaders Need to Know and Do.
📖Read more below about their thoughts on why we need to think about AI (beyond my silly Skynet jokes) and what HR leaders should consider.
Why you need to think about AI:
AI is already embedded in the products we use every day: our laptops, our phones (hello, Siri), our homes (hi, Alexa). The current wave of products marketed as AI-based is broad and feels omnipresent. Investors are putting pressure on executives to adopt and build AI, even if they're not really sure what that means.
As HR executives, you have an opportunity to help make this process less random, more successful, and more ethical. AI is impacting the HR function directly in many ways:
- Worker displacement: AI tools are designed and intended to replace workers. Figuring out a tool's workforce impact in advance allows you to plan, retrain workers, and move them into other roles.
- AI involves training: Many companies are training their employees on AI. HR has a role in choosing who is doing the training and what is covered, making sure that employees are using the same terms. Training should also cover the process for using and/or building AI.
- Employee recruitment and retention: AI also shapes recruiting and how worker perspectives are bridged.
- AI benefit tools can cause harm: Benefits like financial tools and wellness tools can be as harmful as they are helpful. As Dr. Tamara Nopper, a sociologist, writer, and educator, has pointed out, benefit tools can collect data and push decisions onto employees in ways that harm them. These harms often fall disproportionately on lower-income or otherwise marginalized workers.
- Privacy and data collection: Collecting data about employees (as in the tools above) and customers can harm privacy. It is often unclear how the data will be used or further disseminated, and the data often contains private information; some AI health tools do not follow data protections like HIPAA.
- AI biases can harm workers: Harassment and harm can disproportionately impact women, especially women of color, which should be a consideration when using generative AI.
Doing it right means understanding these HR-specific impacts, as well as the broader ethical, environmental, and business impacts.
A thoughtful AI strategy incorporates HR because it should include employee input, workforce training, transparent policies, and informed consent.
Get involved because it’s everyone’s job and especially because HR has a lot to offer here.
Ellen K. Pao is an investor and advocate. She is cofounder and CEO of the award-winning diversity and inclusion nonprofit Project Include. At reddit, she was the first tech CEO to ban revenge porn, unauthorized nude photos, and online harassment. She has also worked in venture capital, tech companies, and in law. She is author of the book “Reset: My Fight for Inclusion and Lasting Change.” Her writing has appeared in The Washington Post, The Los Angeles Times, The New York Times, Time, WIRED, and The Hollywood Reporter. She earned an electrical engineering degree from Princeton, and law and business degrees from Harvard. Her efforts to call attention to discrimination issues have led to the term the “Pao effect.”
What should HR leaders consider when it comes to AI:
**Me getting ready to research the T&Cs of a new AI tool**
When I was a kid, I was curious about how technology worked. Whenever my mom bought a new gadget (usually something pretty mundane, like a remote because ours had stopped working for some reason, or some new appliance that was a step up from a toaster), I was prepared for her to hand it to me and say, "Okay, now learn how to use this and then let me know how it works." I would quickly throw the directions to the bottom of the box and non-destructively test the gadget until I learned everything I needed to know. I was popular in the family as the techy person who never read directions but could figure out how almost anything worked. It was fun, and I believe it is part of what made me the analytical person I am today.

But when you grow up, one of the things you learn is that there is a time and a place for reading the directions, for understanding why something was made and to what end. This is especially true for new AI technologies marketed to HR leaders as precisely what they need to increase productivity and decrease administrative turmoil. For those folks who sigh at the mention of reading the fine print of a digital service with a magnifying glass, it's essential to recognize the difference between Luddite reasoning ("just throw all the tech away") and healthy skepticism ("hmmm, I wonder what happens to my information on ChatGPT once I've hit enter on a prompt").
Here are three things you can ask yourself when you’re considering wide-scale adoption of a new AI tool:
1) With the exception of love, you can’t get something for nothing.
Even then, some people will say that love, too, depending on who it's with, is far too expensive. Humans tend toward a halo effect when a new and exciting tool hits the market: we look at it through a lens of perfectionism and imagine the ideal benefits it could bring us, without considering what the technology can actually do now, how it works, and how the company that produced it makes money. For example, I scroll through countless TikTok videos about creating the perfect homemade meals for your pups from scratch. Then I see an ad. The ad is for a new "free" game that helps you practice your time management skills by solving puzzles. The game looks fun, and I am even happier because there is no cost to download it. But two months in, after I've spent over $200 buying tokens to do who knows what for whatever purpose, I figure out the actual business model.
The business model that has generated billions of dollars of investment in big AI is not about making our lives easier; it's about accessing our data. If HR leaders adopt a new AI tool for organization-wide use, they need Responsible AI policies that 1) clearly communicate how employees are expected to use AI tools at work and 2) provide required education that informs staff of the digital privacy and bias-related risks associated with the tools they are using.
2) Who grew up hearing from a parent, “So if your friend did [insert questionable activity], would you do that too?”
At least once a week, I come across the product adoption diagram on LinkedIn. I'm sure you know it: the bell curve seen around the world that shows consumer behavior across the stages of product adoption. Well, in April of this year, the NY Times reported that the AI boom was ending. Over the past three years, 26,000 AI and machine-learning start-ups have received over $330 billion in funding, but the startups that focus on AI-enabled tools for work are being funded far faster than their products are being adopted at work. Did you know that, according to the U.S. Census Bureau, in November 2023 only 3.8% of businesses reported using AI to produce goods and services?

Part of this hesitation reportedly concerns hallucinations, a term for what happens when an AI tool gives inaccurate information in response to a question. Others worry about copyright infringement, because so many of these tools pull and store content from the internet with little to no consent from the writers and artists who created it. HR leaders should know that using new AI tools to produce content, write reports, and explore case law is very risky and, without the proper training and education, could lead to litigation trouble for a company or organization down the line.
3) Do you know what you don’t know?
Last year, Gallup asked Chief Human Resource Officers how frequently their employees used AI to do their jobs. 44% said they did not know.
Knowing how frequently and why employees use AI at work is important for many reasons. Not knowing could mean we are enabling staff to use AI tools to accelerate harassing and marginalizing workplace behavior. An employee could also be using an AI tool every day for hours at a time, with no malicious intent toward anyone and real care about the information they share, yet be unaware that ChatGPT consumes roughly a 16-ounce bottle of water for every ten prompts they enter. Companies and organizations can better understand their environmental impact if they know what employees use AI for and how often. This knowledge is also very helpful when evaluating tools annually or biannually for their efficacy, ROI, and impact on the business or organization's intended outcomes.
B. McKensie Mack is the Founder and Chief Purpose Officer of MMG Earth, an award-winning global research and change management firm that builds social, technological, and environmental transformations to better people, culture, and society for a world that is relevant, relatable, redeemable, and risk-ready. McKensie envisions a future where the redistribution of wealth and power becomes a daily reality. A trilingual tech enthusiast currently pursuing a double Master's in Finance and Economics with a focus on human behavior, financial psychology, and social transformation, McKensie has been featured in Teen Vogue, NowThis, Fortune, Business Insider, Fast Company, TechCrunch, GLAAD, and many other platforms for their innovative approach to building new worlds where everyone gets to be treated well.
On the season finale…
Two more newsletters left in the year… AND I PROMISE TO DELIVER SOME FUN.
Of course I have some fun holiday-themed newsletters coming up, like the gifts any HR team wants or my 2025 hopes and dreams.
Till then! TA TA