• AI can help businesses 1) raise efficiency and accuracy, 2) gain deeper insights and make better decisions, 3) offer a personalised customer experience, 4) optimise their other tech investments, and 5) strengthen cybersecurity.
• But experts fear AI’s “profound risks to society and humanity,” and businesses may face legal, regulatory, operational, and security-related issues.
• To mitigate these risks and make sure things go smoothly, organisations must 1) create a strong AI strategy, 2) practise change management, 3) understand the tech’s limits and strengths, 4) have a robust IT ecosystem, 5) take AI’s maintenance needs and costs seriously, and 6) stay up to date on regulations.
From the birth of artificial intelligence (AI) as a field at the 1956 Dartmouth summer workshop,1 to the March 2023 launch of GPT-4, the most advanced iteration of the large language model (LLM) powering the popular ChatGPT chatbot, AI has come a long way.
The exponential strides of the past few years, and the technology’s current, perhaps unprecedented, level of sophistication, are a reminder that we live in a fast-changing, increasingly digital world. This new environment presents unique possibilities but also some divisive challenges, not just for business but for society as a whole.
AI’s present applications are varied: aside from chatbots, it powers self-driving cars, digital assistants like Amazon’s Alexa and Microsoft’s Bing, precision agriculture, e-commerce recommendation engines, financial fraud detection and risk assessment, wearable health-tracking devices, robot-assisted surgeries, drug discovery, and social media analytics.
AI tools that generate images, videos, music, and code (like Midjourney, Stable Diffusion, Adobe Firefly, Canva’s text-to-image feature, Google’s MusicLM, and OpenAI Codex) are giving artists and coders a run for their money and stirring up a debate about whether the technology is a boon or a bane for creators.
Researchers in Japan2 have even successfully used AI to analyse people’s brain activity and recreate images they saw.
Companies around the world are also reaping the benefits of incorporating AI into their business.
Enterprise AI, or the use of AI in business to promote digital transformation, is a growing global market, valued at USD 11.1 billion in 2021 and expected to reach USD 64.5 billion by 2028.3
These investments, on top of other digital transformation investments, are a must in today’s competitive digital business landscape. Gartner4 thinks generative AI, in particular, has several promising enterprise use cases: drug design, material science, chip design, creating synthetic data, and generative design of various parts for industries including manufacturing, aerospace and defence, and automotive.
Automation may be one of the first things companies experiment with when adopting AI tools for business, but the technology offers advantages that go beyond efficiency gains. It allows for smarter strategies, deeper insights, hyper-personalised customer experiences5 delivered at scale, the optimisation of other enterprise technology investments like IoT, and stronger cyber defences.
As more companies digitalise and give AI a shot, and as AI models grow more advanced, these gains may be just what you need to remain competitive today. They include:
● Raising efficiency and reducing errors through automation
● Producing richer insights and enhancing decision-making with the help of data
● Creating a better customer experience by enabling personalisation, and delivering it at scale
● Supercharging other enterprise technology investments, like IoT, by having the capacity required to analyse the mountain of data produced by these connected devices
● Bolstering cybersecurity by automating threat detection and blocking attacks without human intervention, helping you stay ahead of hackers who may themselves be using AI (a minimal sketch of this idea follows the list)
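As referenced in the last point, here is a minimal, hypothetical sketch of automated threat detection: an unsupervised model learns what “normal” network activity looks like and flags outliers without hand-written rules. The library choice, feature names, and numbers are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch of anomaly-based threat detection on hypothetical log features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Hypothetical per-connection features: [bytes_sent, duration_s, failed_logins]
normal_traffic = rng.normal(loc=[500, 2.0, 0.1], scale=[150, 0.5, 0.3], size=(1000, 3))
suspicious = np.array([[50_000, 0.2, 8.0]])  # huge data burst plus many failed logins

# Learn the shape of normal activity, then score new events against it.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)
print(detector.predict(suspicious))  # -1 means the event is flagged as anomalous
```

In a real deployment, flagged events would feed an automated response playbook or an analyst queue rather than a print statement.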
As with most novel things, however, this cutting-edge technology comes with risks and unknowns.
Some of the loudest discussions revolve around a possible loss of control over the technology, leading to what experts are calling existential and “profound risks to society and humanity.”6
There are worries about job loss, with a study7 by researchers from OpenAI, the San Francisco-based company behind ChatGPT, estimating that “around 80% of the US workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted.”
The value of art8 in the age of generative AI9 is under scrutiny, as platforms like the previously mentioned Midjourney and MusicLM (recently made publicly available10 after a January 2023 preview) allow just about anyone to create visual art and music.
The use of AI for disinformation11 is another concern, with the creation of fake photos, deep-fake videos, and even fake voices now at the fingertips of scammers and others with malintent.
In response, governments around the world are racing to draft regulations.12
Early believers in the technology are also sounding the alarm: two weeks after the launch of GPT-4, a group of more than 1,000 experts (including AI pioneer Yoshua Bengio, Tesla’s Elon Musk, and Apple co-founder Steve Wozniak) called on AI labs to pause the training of advanced AI systems for six months.13 Barely two months later, Geoffrey Hinton, widely considered the “godfather of AI” and a co-recipient with Bengio of a Turing Award for their work on neural networks, quit Google and warned of the growing dangers posed by the technology.14
Researchers are also using AI to protect against AI. For instance, a team at the University of Chicago developed Glaze,15 an app that prevents artwork published online from being read and mimicked by generative AI models.
The risks for enterprises can be legal, regulatory, operational,16 and security-related.17
Companies may face legal risks posed by AI’s potential for bias.18 AI models are trained on large data sets, and when that historical data reflects a particular bias, the AI’s output may perpetuate it. An AI hiring system, for instance, may produce recommendations that repeat past biased hiring practices and discriminate against particular groups.
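As a purely illustrative sketch (synthetic data and hypothetical features, not any real hiring system), the following shows how a model trained on historically skewed decisions can reproduce that skew even when the protected attribute itself is excluded, because a correlated proxy feature carries the signal.

```python
# Minimal sketch: a model trained on biased historical hiring decisions
# reproduces the bias via a proxy feature. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B (protected attribute)
skill = rng.normal(0, 1, n)              # skill is distributed identically in both groups
# Historical hiring favoured group A regardless of skill:
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n)) > 0.8

# The group label is not an input, but a correlated proxy (e.g. postcode) is.
proxy = group + rng.normal(0, 0.3, n)
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: historical hire rate {hired[group == g].mean():.2f}, "
          f"model hire rate {pred[group == g].mean():.2f}")
```

This is one reason auditing model outputs by group matters more than simply excluding sensitive fields from the inputs.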
Other legal implications involve questions of intellectual property ownership: Are copyright laws being violated when AI models learn from copyrighted material? Who owns the output if no human creator was involved, and can this output be copyrighted?
Potential privacy law violations pose another significant risk. Enterprises that handle personally identifiable information must comply with laws like the General Data Protection Regulation in Europe,19 or Singapore’s Personal Data Protection Act.20 Using third-party AI applications may compromise the privacy of this data, exposing the company to the risk of data breaches and legal, financial, and reputational damage.
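One common mitigation, sketched below under the assumption of a simple text prompt, is to redact obvious personally identifiable information before anything leaves your environment for a third-party AI service. The patterns and example text are placeholders, not a complete or compliant solution.

```python
# Minimal sketch, not a compliance tool: strip obvious PII from text before
# sending it to an external AI service. Patterns are illustrative assumptions.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "NRIC":  re.compile(r"\b[STFG]\d{7}[A-Z]\b"),   # Singapore NRIC format
}

def redact(text: str) -> str:
    """Replace each matched piece of PII with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise: John (john.tan@example.com, S1234567D, +65 9123 4567) raised a complaint."
print(redact(prompt))
# "Summarise: John ([EMAIL], [NRIC], [PHONE]) raised a complaint."
```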
Generative AI has also been found to churn out inaccurate information, and companies that share content based on this may face legal repercussions, including libel.
Unexpected outcomes from AI systems, along with user error in how humans handle them, pose operational risks as well. This is why it is critical to understand what your technology can do and to train employees and other users appropriately.
Lastly, while not unique to AI, companies may also have to deal with a lack of in-house skills to maintain and optimise the use of AI platforms.21 Implementation times can also be lengthy, and issues related to the interoperability of AI systems with other systems could arise later on.
Beyond the hype and despite the risks, there is still a lot to gain from incorporating AI into your business. Here are some best practices to help you in your AI journey.
● Create a strong AI strategy
Set a clear vision, see how AI can differentiate you from your competitors, and communicate and be transparent about this strategy with employees and other relevant stakeholders.
Let your business strategy inform your AI strategy, not the other way around. To do this, Deloitte22 recommends having senior leaders develop the strategy in partnership with your data scientists and IT team, rather than asking IT to build the strategy from the ground up. This way, your move towards AI will stem from real business goals, not just an urge to have the latest tools.
● Practise change management
Make sure your deployment goes smoothly by managing the different organisational changes that come with introducing a new system. The process may vary across companies and solutions, but a basic change management plan for AI deployment23 may entail:
1. Finding potential roadblocks to adoption before launching
2. Communicating the strategy clearly and getting stakeholder buy-in
3. Assigning tasks to specific people
4. Training your team
5. Encouraging the use of the tool to achieve high adoption rates
6. Ensuring long-term success by embedding the process into your culture and workflow
● Understand AI’s limits and strengths
AI is not a homogeneous technology. Most of the buzz recently has been around generative AI, but there are plenty of different AI solutions available for enterprises, including natural language processing, machine learning, and computer vision. Each has its own pros and cons, so do a deep dive before you commit. Remember: let your business goals drive your AI strategy.
● Make sure your IT ecosystem is robust enough for AI
Powerful AI systems require a lot of support. Check if your current ecosystem has what it takes to let you fully utilise AI, or if additional investments are needed.
5G, delivered through platforms such as Singtel Paragon, may be critical in getting your AI systems to work seamlessly. 5G can help maximise AI investments by giving you the connectivity required to truly leverage intelligence at the edge, support low-latency applications, and ensure end-to-end orchestration.
● Have a rigorous AI trust, risk, and security management (AI TRiSM) approach
Gartner24 predicts that by 2026, organisations that operationalise AI trust, transparency, and security will see their AI models achieve a 50% improvement in adoption, business goals, and user acceptance. It defines AI TRiSM as “a framework that supports AI model governance, trustworthiness, fairness, reliability, robustness, efficacy, and privacy. It includes solutions, techniques, and processes for model interpretability and explainability, privacy, model operations, and adversarial attack resistance for its customers and the enterprise.”
Help ensure your AI deployment’s success by having a strong AI TRiSM approach and crafting an organisational AI policy that addresses your legal, regulatory, operational, and security vulnerabilities.
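Interpretability tooling is one concrete TRiSM building block. The sketch below uses a toy model and synthetic data (all names and parameters are assumptions) to estimate how much each input feature contributes to predictions via permutation importance, the kind of evidence an internal AI policy might require before a model goes into production.

```python
# Minimal sketch of model explainability via permutation importance.
# Dataset and model are synthetic stand-ins, not a recommended architecture.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# How much does held-out accuracy drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```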
● Stay up to date on, and align your AI strategy with, relevant laws and regulations
It’s hard to tell how quickly the AI regulatory landscape will shift. Britain, China, and the G7 are in the preliminary planning stages, the EU is drafting what could be the world’s first comprehensive laws on AI, and France, Italy, and Spain are already investigating possible data privacy breaches related to ChatGPT. Regulation and legislation will likely lag behind AI advancements, so it’s important to keep an eye on developments and be prepared to adapt or pivot accordingly.
In today’s fast-moving digital business environment, AI offers a novel way for companies to remain competitive. There are risks that must be examined with a critical eye, but, with careful planning and execution, forward-looking organisations can reap the benefits of this groundbreaking technology.
1. Stanford University, Appendix I: A Short History of AI | One Hundred Year Study on Artificial Intelligence (AI100).
2. Science, AI re-creates what people see by reading their brain scans, 2023.
3. Vantage Market Research, Enterprise Artificial Intelligence (AI) Market Size & Share to Surpass $64.5 Billion by 2028, 2023.
4. Gartner, Beyond ChatGPT: The Future of Generative AI for Enterprises, 2023.
5. Harvard Business Review, Customer Experience in the Age of AI, 2022.
6. The New York Times, What Exactly Are the Dangers Posed by AI?, 2023.
7. OpenAI, GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models, 2023.
8. The New York Times, AI-Generated Art Won a Prize. Artists Aren’t Happy, 2022.
9. Waxy, Opening the Pandora's Box of AI Art, 2022.
10. Google, How to try MusicLM from Google’s AI Test Kitchen.
11. The Conversation, AI tools are generating convincing misinformation. Engaging with them means being on high alert, 2022.
12. Al Jazeera, Governments race to regulate artificial intelligence tools, 2023.
13. Future of Life Institute, Pause Giant AI Experiments: An Open Letter, 2023.
14. MIT Technology Review, Geoffrey Hinton tells us why he’s now scared of the tech he helped build, 2023.
15. University of Chicago, Glaze, n.d.
16. LegalTech News, Key Legal and Operational Risks for Enterprise AI, 2023.
17. Team8, Generative AI and ChatGPT Enterprise Risks, 2023.
18. The Wall Street Journal, Rise of AI Puts Spotlight on Bias in Algorithms, 2023.
19. European Union, General Data Protection Regulation (GDPR).
20. Personal Data Protection Commission (PDPC) Singapore, Data Protection Obligations.
21. nibusinessinfo.co.uk, Risks and limitations of artificial intelligence in business, n.d.
22. Deloitte, Becoming an AI-fueled organization, 2021.
23. Salesforce Canada, The Role of Change Management When Implementing AI, 2021.
24. Gartner, Safe and Effective Implementation of AI TRiSM, 2022.