AI Potential Meets ROI Pragmatism: 3 Crucial Questions Every CIO Should Ask

Eric Helmer
SVP & Chief Technology Officer
5 min read

As the volume and velocity of new AI technology accelerate, the pressure on IT leaders to implement AI at any cost is all too real. But there’s a better way: Look before you leap by first asking these three key questions.

If you had to name 2023’s single most impactful and disruptive technology, you’d need just two letters: AI. With the release of OpenAI’s ChatGPT in November 2022, we watched a tsunami of AI news and noise throughout the year. And there’s no sign of things slowing down in 2024 and beyond.

Even for technology insiders, the rapid pace of generative AI’s development and adoption across all business sectors was simply astonishing. Many organizations that considered themselves to be forward-thinking in 2022 suddenly found themselves playing catch-up in 2023. But if FOMO was a thing last year, this year we say NOMO, as in no more fear of missing out. Instead, look before you leap.

As we move forward into a new year, it’s crucial that we commit to a resolution that will help us create significant value for our shareholders, both now and in the years to come. Let’s promise ourselves that 2024 will be the year that we adopt a pragmatic approach to harnessing the vast potential of AI. To do so, we need to first ask ourselves three key questions:

Question #1: How will we use AI to meet our specific business objectives?

In this new year, the speed and scale of AI implementation will make the progress of 2023 look slow by comparison. By 2025, IDC expects Global 2000 companies to devote more than 40% of their core IT budgets to AI-related activities, with worldwide AI spending predicted to exceed $500 billion by 2027. The question, then, is not whether you will shift toward more AI-influenced operations in 2024 but how — and, more importantly, why.

AI promises seemingly limitless possibilities for tech-savvy organizations — everything from sorting and analyzing vast amounts of data to improving customer service and patient care. Looking forward, it’s easy to imagine how AI could alleviate supply chain headaches and even create virtual reality training simulations.

Whatever their outward manifestation, though, effective AI activities tend to focus on two underlying objectives: expanding capabilities and eliminating waste. In other words, AI is most often used to increase what your organization can do and improve how it’s done, freeing up human and financial resources that you can deploy more strategically elsewhere.

As IT leaders start thinking about how to incorporate AI into their organizations, they’ll likely focus on generative AI and other advanced AI capabilities to cut costs, especially by automating mundane tasks and optimizing resources. Cost savings are an important consideration, but they shouldn’t be the only one on the table.

Organizations without a clear vision of what they want to accomplish in 2024 will find plenty of AI bells and whistles but very little direction. AI is a tool, not a mission statement. Adopting AI in a general sense is not the same as being strategic about how it’s used. In that respect, AI is no different from other transformative technologies we’ve encountered in the past, such as cloud computing.

As the number of AI features multiplies over the next 12 months, it’s crucial that organizations take precautions now to avoid shiny object syndrome, where the excitement of adopting this new technology turns from distraction to detriment. To preserve the integrity of their organizations, leaders must evaluate how they prioritize investments so they can direct spending toward the technology areas that best serve their business goals.

Question #2: How will we make sure that we use AI responsibly?

While AI can be a powerful tool for achieving business objectives, it can also be a serious liability fraught with risk, which is another reason organizations should take a deliberate, pragmatic approach to AI adoption. Public concern about AI’s rapid growth is widespread, and, as many organizations discovered in 2023, any perceived misuse of AI can significantly harm brand image, regardless of the original intention. In the public eye, there is no room for error when it comes to AI use.

It’s imperative that you and your business stakeholders carefully and regularly review procedures to ensure the ethical use of AI tools and AI-generated outputs. Even if your organization is not presently active in Europe, the EU’s forthcoming Artificial Intelligence Act should inform your AI-related policy decisions. And with a recent hearing on the oversight of AI in the United States Senate, it’s possible the American government will issue guidance as well.

At the very least, you should have clear and detailed safeguards in place to address how your organization plans to handle the following issues:

  • Protecting the privacy of customer data
  • Ensuring that proprietary information is not fed into generative AI models
  • Ensuring that AI models and outputs do not reflect bias or prejudice
  • Maintaining vigilant supervision over all AI-related activities
  • Maintaining clear reporting structures for all employees who use generative AI
  • Demonstrating transparency of AI usage to stakeholders inside and outside of the organization

As you might expect, your legal department should be deeply involved in these conversations.

Question #3: How will we make sure that our employees use AI successfully?

Ultimately, AI adoption is not just an IT issue: it is a workforce issue. Are your employees ready? If we’re being honest with ourselves, the answer is probably “not yet.” With any new technology, many companies operate within the “we bought it, so you have to use it” paradigm. This inevitably leads to poor morale and haphazard implementation, which undermine the organization’s goals. ROI quickly becomes DOA.

Organizational change often draws out strong emotions from employees. This is especially true when dealing with powerful and disruptive technologies like generative AI. Having conversations with your workforce about AI-related activities before you implement them will go a long way toward calming their fears and helping you meet your objectives. Employees need to know…

  • The vision and goals behind your organization’s adoption of AI
  • How AI will augment and enhance the work they do
  • The steps you’re taking to protect them and your customers from AI misuse
  • The steps they should take to report any concerns about specific AI activities
  • How you’ll provide ongoing opportunities for them to gain the skills they need to manage/leverage AI effectively
  • Or, in some cases, how you’ll help them transition to new positions if AI makes their current roles obsolete

In other words, before you begin the process of dedicated upskilling, you should have conversations with your workforce about how you’ve redefined your business objectives (question 1), created your AI policies and procedures (question 2), and prepared for their successful adoption of AI (question 3). Doing so will make employees more comfortable with the initiative and give them a sense of ownership of the technology, which in turn gives you a far greater chance of using AI to generate real value for your organization.

Looking forward

While it seems inevitable that the adoption of AI-enabled technologies will continue to expand and accelerate in 2024, successful organizations will need to focus time and effort first on understanding where AI can actually deliver maximum ROI. Here’s to all of us finding success through pragmatism in this new year.

Learn more: If you’re assessing AI for ERP, here’s guidance on how best to look before you leap.