AI use cases are spreading rapidly across the workforce, but many companies don’t seem to have a policy in place to prevent workplace disasters. The Conference Board recently surveyed 1,100 US employees (you can find the survey here) and found that 56% of respondents are using generative AI on the job, mainly to draft content, brainstorm ideas, and conduct background research. Concerningly, 74% of respondents said that their organization has no AI policy, that its policy is still under development, or that they don’t know whether one exists. Without guidelines around organizational AI use, the consequences range from data exposure to incorrect conclusions to improper applications of the technology.
Need a couple of examples? Here are a few incidents specifically involving ChatGPT:
- Samsung employees exposed sensitive code while trying to leverage generative AI (Forbes)
- A lawyer submitted a legal brief containing six fabricated case citations generated by ChatGPT (Reuters)
- A university professor used ChatGPT to accuse students of using ChatGPT (Washington Post)
Having a policy in place is a good start, but it isn’t the only thing that needs to be done. Companies need to develop enablement plans that train employees on when and how they can use AI and ensure they understand its limitations.
Navigating Around the Limitations of AI
Compared to past advances in platforms and tools, generative AI is unique in that much of its potential has been identified through trial and error by end users. This makes it difficult for newer users and organizations to understand where AI should and shouldn’t be applied. Harvard Business School released a working paper that examines the effect of GPT-4 augmentation on consultant performance at the Boston Consulting Group. In the study, consultants were assigned different levels of AI augmentation and sets of business-related tasks. The key points to consider are:
- Participants who relied on GPT-4 were more likely to get answers wrong on tasks where the model’s output steered them toward the wrong conclusion.
- Correctness aside, AI-augmented participants produced work that human graders rated higher in quality.
- Work from participants using GPT-4 showed less variety and diversity than work from those who weren’t using it.
There are tasks AI can help us with and tasks it can’t, so it’s important to understand the benefits as well as the drawbacks. The best way to work within its limitations is to treat AI like an intern or an apprentice: it can provide help, but you still own the final product and need to double-check its work to learn what it can and can’t do.
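To make the intern analogy concrete, here’s a minimal human-in-the-loop sketch in Python. It assumes the `openai` client library and an `OPENAI_API_KEY` environment variable; the `draft_with_review` helper, the `gpt-4o` model name, and the example task are illustrative assumptions, not a prescribed workflow.

```python
# Minimal human-in-the-loop sketch: treat the model like an intern whose
# work a person must review and sign off on before it ships.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set,
# and draft_with_review is a hypothetical helper, not part of any library.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_with_review(task: str) -> str | None:
    """Ask the model for a draft, then require explicit human approval."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; swap in whatever you use
        messages=[{"role": "user", "content": task}],
    )
    draft = response.choices[0].message.content or ""

    # The review gate: the draft is never used automatically.
    print("=== AI DRAFT ===")
    print(draft)
    verdict = input("Approve? Verify every fact and citation first [y/N]: ")
    return draft if verdict.strip().lower() == "y" else None

if __name__ == "__main__":
    approved = draft_with_review("Draft a three-bullet summary of our Q3 memo.")
    print("Shipped." if approved else "Rejected: revise the prompt or do it by hand.")
```

The specific API matters less than the design choice: AI output only becomes finished work after a named human owner has checked it, which is exactly how you’d treat an intern’s draft.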
Wrapping Things Up
You or your employees may already be using AI models to aid with writing and creativity, which means your policies and processes need to evolve. A seemingly innocuous tool you decide to use, like a resume screener or ChatGPT, has its benefits but also potentially unforeseen consequences.
Interested in hearing more? Stay tuned for more AI-related content!