
AI in the workplace: Balancing benefits & management

Building transparency and trust can enable employees to harness generative AI applications safely without exposing sensitive data.

It's one hour before the end of your shift when a new request comes in from your supervisor asking for the employee performance reports before the day ends. The short deadline leaves little room to write a professional report, so you share your employees' data with a generative AI platform and let it write the report on your behalf. You submit the report on time, but your employees' information has now left your organisation's control and may be retained by the platform.

Any information shared with large language models (LLMs) like ChatGPT and Bard may be retained and used to train and improve the model. Who can forget when employees at a semiconductor giant accidentally shared confidential code while using ChatGPT for help at work?

This risk of exposing data via generative AI applications has opened a new cybersecurity front, and it has pushed governments across Southeast Asia and the world to adopt policies that encourage responsible usage. For example, Singapore's Personal Data Protection Commission (PDPC) created the Model AI Governance Framework, which outlines good data accountability practices and encourages transparent communication. Similar initiatives have emerged across the region, including in Indonesia, Thailand, Malaysia, and Vietnam.

However, organisations themselves have a role to play in keeping data secure, especially when their employees are engaging with generative AI applications that sit outside their IT teams' purview. The use of AI applications beyond the oversight of an organisation's IT and security teams has become so widespread that it has been coined "Shadow AI." Shadow AI refers to the use of any unsanctioned application or website built on an LLM, ranging from chatbots like ChatGPT to coding tools like AlphaCode.

Bringing shadow AI to light

Privacy concerns have prompted some organisations to enforce a ban on LLMs in the workplace. Nevertheless, this prohibition may have limited effectiveness, since these platforms are also accessible on personal mobile devices, and employees who are determined to use LLMs for work will often find ways to do so despite company protocol. An outright ban also removes the potential benefits these applications offer individuals and organisations alike, such as increased productivity, creativity, and a better overall employee experience.

Keeping data under wraps with AI governance

Banning generative AI altogether can create unneeded friction for employees and leave organisations at a competitive disadvantage. AI can rapidly sift through vast amounts of data, and organisations that ban it miss out on efficient data analysis for well-informed decision-making, leaving employees to spend valuable hours combing through data manually instead of focusing on higher-level tasks. To protect customers' and stakeholders' interests while ensuring that employees can continue to leverage generative AI, organisations need to implement solutions and measures that provide greater visibility and control over AI use.

Data loss prevention (DLP) solutions, which detect and block the transfer of sensitive data by matching it against predefined patterns, may assist organisations in this endeavour. However, these tools require constant maintenance and do not give users the context that would help them avoid future data exposure. Instead, organisations should rely on a more proactive solution that guides users toward safe and responsible AI use.
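To make pattern-based detection concrete, the sketch below shows a toy check of the kind a DLP tool might run on a prompt before it leaves the organisation. The patterns and the EMP-style employee ID format are hypothetical illustrations, not any vendor's actual rules; real products ship far larger, continually updated rule sets, which is where the maintenance burden comes from.

```python
import re

# Illustrative only: hypothetical patterns a pattern-based DLP check might use.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "employee ID": re.compile(r"\bEMP-\d{5}\b"),  # assumed internal ID format
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text
    before it is sent to an external generative AI service."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Summarise the review for EMP-04512 (jane.doe@example.com)."
    findings = scan_prompt(prompt)
    if findings:
        print("Blocked: prompt contains", ", ".join(findings))
    else:
        print("Prompt allowed")
```

Note that a check like this says nothing about why the content was blocked or what the employee should do instead, which is the gap a more guidance-oriented approach aims to fill.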

Guiding employees along the responsible AI path

One of the best solutions for organisations is to implement digital adoption platforms (DAPs), which sit on top of generative AI platforms and support users in using the technology safely. Through segmented pop-up alerts, automation, and on-screen guidance, DAPs can remind employees of company policies regarding specific websites or applications before errors are made. They can also hide specific functionality and reroute employees to safer alternatives. A DAP can also be a key asset for IT leadership, providing full visibility into how employees are using generative AI applications so that leaders can decide how best to optimise usage while minimising risk.

Leadership teams also need to be educated on the latest developments in AI technologies, including their potential dangers and benefits, so that they can establish responsible AI policies and allocate resources effectively. Informed managers can then enable a smarter AI adoption approach that balances business objectives with compliance requirements and, in turn, minimises potential risks.

Finally, organisations should host workshops and discussions that enable staff at all levels to share insights, risks, and best practices related to generative AI. This exchange helps employees keep up with new AI technologies and their benefits, and it equips them to make smarter decisions in support of their company's strategic objectives.

AI is a game-changer for boosting employees' creativity and efficiency, and organisations that harness it well can gain a real competitive edge. Doing so, however, requires organisations to stand with their employees, giving them the skills and knowledge needed to keep data protected. With DAP guidance, education, and open communication, AI can become a crucial asset in unlocking an organisation's future success.

Topics: Learning Technology, #ArtificialIntelligence, #EmployeeExperience
