Evolving technology in the workplace - employers need to mitigate the risks
- Judith Griessel
- Sep 2
- 4 min read

When your employees are also content creators or regard themselves as influencers, it is very easy to blur the line between “a day in the life” and security risks for your organisation. Consider the airport employee who posted her story on TikTok – typing in her PIN, wearing her security badge, showing airport operations and the like. The algorithms loved it – her employer did not.
Which new technology-related risks are employers facing at this moment?
Employers must keep up with new technologies and with how easy it is for anyone to publish content – anywhere and everywhere – especially since employees are not necessarily thinking about the potential risks and consequences when they are chasing ‘likes’. We have been alerted to the trend of teachers making videos in their classrooms for their private socials, without ensuring that the necessary consent has been obtained from colleagues or from the parents of minor children who might appear in the video. This is definitely a no-no under POPIA – it places the school at risk and exposes the teacher to personal legal liability as well.
Another topic getting a lot of attention recently is the use of emojis in workplace-related communications. This is also fraught with risk, because it is so easy to misinterpret someone’s intentions. People assign different meanings to symbols – based, for example, on cultural and generational differences. The courts have also recognised that something like a ‘thumbs-up’ could be interpreted as consent for legal purposes. It all depends on the context. A hand emoji in a colour other than the yellow default may mean something. So could the colour of a heart emoji. Or sending a cocktail emoji to an addict…
And the hot topic of the moment – Artificial Intelligence! We all know by now that AI can hallucinate and generate misleading content, so checking and verifying the outputs is a must. AI can also expose personal or confidential information if a public AI tool is used and employees upload or reference proprietary information, client details, private information of co-workers, or share private login credentials with the AI. Employees may be tempted to use their favourite AI tool to help them work faster, but they should be aware of the risks and understand that such information may inadvertently become public.
Also take note of algorithmic biases that might be embedded in AI tools. If such tools are used in recruitment, performance evaluation or other types of decision-making without adequate human oversight, these biases might be amplified and result in discriminatory practices. It will not be a defence to say, “I used AI, leave me alone”!
What should employers do?
If employers want to mitigate risks such as workplace harassment claims, cyber-bullying, reputational harm, security compromises and other potential fall-out, they need to take action. Banning content creation or the use of emojis or AI by employees is not realistic, but it does need to be managed. Workplace policies and contracts need to be updated – of course – but education and regular discussions on these topics are equally important. Explain to employees what NOT to show or do, and why. Train them regularly on the legal, ethical and operational risks of evolving technologies.
As a starting point, employers need to put in guardrails – now. Technology advances too fast for workplace rules or policies to pinpoint every possible risk (although your IT department will have a lot more to say about this!), but from an HR perspective, broad guardrails for acceptable use are essential. This means updating employment contracts and policies.
New clauses could include:
- Defining permissible platforms and AI tools, or making it clear that certain protocols must be followed to evaluate the security and reliability of a tool, or that authorisation must be obtained, before it is used.
- Prohibiting specific conduct, such as the entry of personal, confidential or proprietary data (including IP) into unapproved AI systems.
- Imposing verification obligations on the user: to check intellectual property rights when interacting with AI and, of course, to verify all outputs before using them. AI results always need to be tempered by human expertise, ethics and value judgment.
- Stipulating that employees must clearly separate work-related and private use of technology, especially if they are permitted to use the same device for both, and that they must not side-step company IT protocols by using AI on a personal device for work purposes.
- Limiting the use of emojis by providing guidance on what is and is not acceptable. For example: no emojis in formal emails or when communicating with external parties; and guidance on how (not) to use emojis on internal and external WhatsApp groups.
In addition, from a governance and compliance perspective, organisations should also ensure that they have secure IT systems that can maintain audit trails of AI use. This may become necessary for internal investigations, compliance audits and potential litigation.
Remember also that the updated POPIA Regulations make it compulsory for Information Officers to ensure that their organisation’s compliance framework is continuously updated.
Conclusion
As the saying goes, the only constant in life is change. Employers and business leaders have their work cut out for them to keep up with technology. AI is the new electricity – and we all know how that changed the world…
Contact us if we can help your business.
© Judith Griessel
