New possibilities in performance reporting are emerging due to artificial intelligence (AI), and among its compelling applications are predictive analytics and natural language processing (NLP).
The education technology company DeepLearning.AI describes NLP as a discipline that enables machines to understand, interpret, and generate human language. NLP can also analyze unstructured data from diverse sources. On the other hand, predictive analytics, according to IBM, uses historical data, statistical modeling, data mining techniques, and machine learning to make predictions about future outcomes.
Case Study: AI-Powered Performance Reporting in Healthcare
In its case study about UHS, Nuance stated that Dragon Medical One allows doctors to verbally record patient updates, current illness history, and treatment plans directly into their electronic health record (EHR) from almost any location. Using software like Microsoft Power BI, Dragon Medical One can also provide detailed insights into performance metrics at various levels. The reports include productivity forecasts, dictation quality data, and industry-wide comparisons.
Nuance pointed out the growing need for efficient documentation in healthcare as doctors are now being evaluated on quality metrics, which are publicly reported to the government. The software company also explained that the scores of physicians and hospitals are determined based on the accuracy of the documentation of patient conditions and the care provided. “Physicians, however, don’t always give themselves credit for the real medical complexity of the patient because of the extra time it takes to fully document it,” the report states.
In light of this, Nuance emphasized that CAPD aims to strengthen human intelligence by providing automatic suggestions to the doctor during patient care, but only when there is an alternate diagnosis or additional medical data that needs to be taken into account. This fully integrated system also allows UHS to analyze patient interactions in real time through the use of NLP.
Results of UHS’ initiatives show that there was a 12% increase in the case mix index (CMI) when physicians agreed with the CAPD clarifications and updated patient documentation. The healthcare AI system also improved the identification of “extreme” cases of disease severity by 36% and mortality risk by 24%. In addition, they recorded a 69% reduction in transcription costs year over year, resulting in $3 million in savings.
Next Steps
Companies planning to use AI in performance reporting can start by identifying areas in their operations where unstructured data is prevalent and manual processes are time-consuming. Next, it is important to develop an AI strategy that aligns with the company’s objectives.
Moreover, organizations should consider forming partnerships with AI solution providers due to their specialized expertise, experience, and ability to provide customized solutions more quickly and cost-effectively than developing in-house capabilities from scratch. Lastly, companies should invest in training their staff to work alongside AI technologies to cultivate a culture of innovation and continuous improvement.
While integrating AI into performance reporting is promising, it requires alignment with organizational objectives and flexibility from stakeholders. Click here if you’re interested in more practical applications of AI in strategy and performance management.
**********
Editor’s Note: This article was written with the help of Francesco Colamarino, a former Management Consultant at The KPI Institute.
Artificial intelligence (AI) has emerged as a transformative force in corporate strategic management, fundamentally altering the way companies make strategic decisions. AI is crucial in driving innovation even in the face of dynamic business environments and data abundance.
The integration of AI into corporate strategic management offers a myriad of benefits for businesses seeking to navigate the complexities of the modern market, namely:
Data-driven decision-making: AI empowers companies to transform raw data into actionable insights to identify market trends, assess customer preferences, and predict future outcomes more accurately. AI supports data-driven strategy, leading to better resource allocation, risk mitigation, and operational effectiveness. For instance, a company can leverage AI predictive analytics capabilities to forecast future revenue, competitive threats, the likelihood of expansions succeeding, and other core strategic considerations years in advance.
Enhanced strategic planning: AI’s capabilities extend beyond data analysis to encompass strategic planning and scenario modeling. AI-powered tools can simulate tens of thousands of realistic scenarios per minute, allowing companies to evaluate the potential impact of strategic decisions and identify potential risks and opportunities before committing to major investments. Maersk, for example, uses cutting-edge AI algorithms to revolutionize its container shipping operations. These algorithms optimize vessel routes for efficiency, predict equipment maintenance needs to minimize downtime, and provide real-time insights into cargo location and status, ensuring unparalleled transparency and efficiency.
Customer-centric strategies: AI plays a pivotal role in understanding and anticipating customer needs, enabling companies to develop customer-centric strategies that foster long-term customer loyalty and enhance brand reputation. AI-supported tools can analyze customer behavior, preferences, and feedback; thus, AI can provide valuable insights to personalize marketing campaigns, improve product offerings, and optimize customer service experiences. For instance, H&M Group uses AI to create personalized shopping experiences, optimize product offerings, and improve customer satisfaction by analyzing customer data and preferences.
Competitive advantage: AI adoption provides companies with a competitive advantage in a rapidly evolving market. AI-driven strategies enable organizations to adapt quickly to changing market dynamics. By leveraging AI’s capabilities, corporations can outstand competition and establish themselves as leaders in their respective industries. Unilever, for example, leverages a system to analyze sales data, marketing campaigns, economic trends, and weather patterns to predict future demand more accurately. This enables the company to optimize production planning, reduce waste, and improve profitability.
AI Challenges Within Corporate Strategic Management
While AI presents immense opportunities, it is crucial to address the ethical considerations surrounding its implementation, such as the potential for AI to perpetuate biases and discrimination. AI algorithms can be trained on biased data sets, leading to partial decision-making and deeper inequalities. Transparency and accountability are other ethical concerns in AI decision-making as it is important to foster trust and understanding of how decisions are made. This is to ensure that everyone involved can act on a just and well-informed strategy.
Successful AI integration also requires a cultural shift within organizations. Companies need to develop training and educational programs to equip employees with the needed skills and to work with AI systems and capabilities in harmony. This should improve communication and collaboration, leading to better alignment with corporate strategic objectives.
Companies that embrace and adapt AI strategically will be well-positioned to navigate the complexities of the modern market and achieve sustainable competitive advantage. For effective adoption of AI capabilities, companies need to develop transparent and accountable AI systems and establish clear ethical guidelines for AI use. Additionally, companies need to engage in ongoing dialogues with stakeholders to build trust and ensure that AI is used responsibly and ethically.
**********
Editor’s Note: This article was first published on February 12, 2024 and last updated on September 17, 2024.
Natural language processing (NLP) is a game-changer in small and medium-sized businesses (SMB) lending. SMB lending is widely considered a slow, protracted process that frustrates both borrowers and lenders. Unlike larger corporations, most SMBs are informal businesses with limited financial records. This requires borrowers to submit a plethora of both digital and paper- based legal documents, collateral deeds, financial reports, and even business plans—all of which need to be verified and analyzed meticulously to determine the lender’s credit worthiness. This makes it difficult for SMBs to access the much-needed financial products in a timely manner as the end-to-end loan process would take on average between three to six weeks to close.
To be profitable, the sector must also rely on the volume of borrowers while also adhering to tight regulations set by the central bank. This effort takes a lot of resources, especially manpower. With the addition of risk due to fraud and loan defaults, there is a need for a system that assists lenders simplifying such tedious processes while also maintaining overall credit quality, measured by KPIs such as % Loan delinquency and % Bank non performing loan and gross loan.
Enter NLP, a form of artificial intelligence (AI) that allows computers to understand both spoken and written human language. NLP has a lot of potential due to its ability to automatically “read” and extract useful information from both structured data (ie. sales reports) and unstructured data (ie. social media data). According to a survey in 2023, NLP has been widely used for data recognition and extraction, human intention classification (ie.g. chatbots) and natural language generation (ie.g. ChatGPT). These solutions utilizations offer organizations increased in cost efficiency through improvements in key performance indicators (KPIs) such as % Process efficiency and # Process completion time.
NLP has the potential to increase a bank’s business volume by increasing the number of loan applications and decreasing the time it takes to process each applications—measured using KPIs such as # New loan inquiries, # Process completion time, and % Process efficiency ratio. Its main role in the SMB lending process is automating menial tasks, including data extraction from many paper-based documents.
Data points such as business history, revenue, expenses, and liabilities can be simultaneously extracted and validated from borrowers’ identification, bank statement, financial report, and business plans, reducing the effort it takes to go through these documents manually. These improvements can be measured using KPIs such as # Report processing time. In addition, NLP can also help streamline compliance and legal review processes as it can scan legal documents to ensure that they comply with regulatory requirements or identify any documents that need to be reassessed by lending officers.
NLP’s ability to “read” and analyze unstructured data can also enhance the credit risk assessment process. Information from news articles, social media posts, and financial news can be an additional layer of analysis that provides novel insights that traditional financial metrics might miss, such as economic and financial sentiment. Data-driven pricing analysis can also ensure that lenders recommend the most competitive interest rates to potential borrowers.
NLP can also help integrate data from various sources, including emails, voice transcripts, and other communication channels, providing a comprehensive view of the applicant’s profile and history. This allows lending officers to focus on the analysis and information selection that would help credit approval, which can lead to improved scores in the # Loan officer productivity indicator.
Perhaps more exciting is the future development of AI from companies such as Thelightbulb, which would allow AI to “read minds” by collecting and analyzing unconscious, non-verbal responses, and other biometric cues. This would assist lending officers in analyzing the behavior of a potential borrower’s characteristics during interviews and knowing their customer’s process, which enhances credit risk assessment and helps lenders understand their customers’ needs.
Despite the numerous benefits of using NLP in the SMB lending process, the inherent risk of lending and the tight regulation mixed with the current capability of AI tools remind us that the role of humans remains very important in selecting and analyzing the right set of data. To ensure effective and cost-effective adoption of the technology, financial institutions must fully understand the specific part of the process they wish to enhance. This will involve implementing measurement tools and indicators that can quantify the amount of improvement the technology can bring to the table. Talents must also receive training, not only to operate the technology, but also to understand the boundaries between their expertise and AI capabilities. Furthermore, the company must also ensure that it already has an internal AI governance framework and regular audit systems in place to establish accountability and fairness in the use of technology.
Click here for more in-depth articles and interviews that discuss how artificial intelligence can be integrated into strategy.
Key performance indicators (KPIs) have been the north star guiding business strategy for decades. These criteria measure not only sales and revenue but also customer satisfaction as well as employee engagement.However, as the business landscape continues to evolve at an unprecedented pace, the need for deeper insights and more agile measurement arises. This is where the potential of generative artificial intelligence (GenAI) shines, opening doors to a new era of KPI innovation.
GenAI goes beyond automation to produce entirely novel content. It is a creative catalyst, opening up unprecedented possibilities for KPI innovation. Forget rigid, one-dimensional metrics. Powered by GenAI, KPIs become fluent, adaptive, and poetic, capturing not only the whats but also the whys and what-ifs.
Reimagining KPIs for exponential growth
From static to dynamic: GenAI is capable of integrating dynamic KPIs, meaning they can evolve alongside the company that uses them. KPIs also fit seamlessly into a changing market, with trends and strategies naturally shifting along the way.
Unveiling the unseen: Traditional KPIs often fail to hit the nail on the head by overlooking key, intangible factors that could affect performance. GenAI, however, can delve much deeper. With the help of GenAI, it is possible to determine brand sentiment before a particular campaign is launched, anticipate employee engagement within remote teams, or even predict customer turnover before it happens.
Personalized insights, enhanced action: Data mountains no longer need to be intimidating.GenAI transforms data into personalized narratives, crafting stories tailored to individual stakeholders. Sales teams can access actionable insights, marketing managers can monitor real-time customer sentiment, and CEOs can explore what-if scenarios for strategic foresight. This data-driven storytelling fosters informed decision-making and ignites action across the organization.
A practical guide to unlocking GenAI’s potential for KPI innovation
To effectively utilize GenAI tools like Gemini and ChatGPT for KPI innovation, follow these guidelines:
Define goals and challenges: Clearly articulate objectives, whether uncovering customer sentiment or anticipating market shifts.
Frame specific prompts: Use concise prompts such as “generate potential KPIs for measuring brand sentiment on social media.”
Provide relevant context: Enhance responses by furnishing background information about your industry, business model, and existing KPIs.
Experiment and refine: Iterate prompts, rephrase questions, and provide feedback to improve AI understanding.
Collaborate with experts: Involve human expertise in evaluating and implementing AI-generated insights.
While GenAI’s potential for KPI innovation is undeniable, it thrives on synergy, not substitution. The point is this: human guidance is essential. Act now, invest in your future, and become a master of the new KPI era by enrolling in The KPI Institute’sCertified KPI Professional course.
In May 2023, Samsung Electronics prohibited its employees from using generative artificial intelligence (AI) tools like ChatGPT. The ban was issued in an official memo, after discovering that staff had uploaded sensitive code to the platform, which prompted security and privacy concerns for stakeholders, fearing sensitive data leakage. Apple and several Wall Street Banks have also enforced similar bans.
While generative AI contributes to increased efficiency and productivity in businesses, what makes it susceptible to security risks is also its core function: taking the user’s input (prompt) to generate content (response), such as text, codes, images, videos, and audio in different formats. The multiple sources of data, the involvement of third-party systems, and human factors influencing the adoption of generative AI add to the complexity. Failing to properly prepare for and manage security and privacy issues that come with using generative AI may expose businesses to potential legal repercussions.
Safety depends on where data is stored
So, the question becomes, how can businesses use generative AI safely? The answer resides in where the user’s data (prompts and responses) gets stored. The data storage location in turn depends on how the business is using generative AI, of which there are two main methods.
Off-shelf tools: The first method is to use ready-made tools, like OpenAI’s ChatGPT, Microsoft’s Bing Copilot, and Google’s Bard. These are, in fact, nothing but applications with user interfaces that allow them to interact with the base technology that is underneath, namely large language models (LLMs). LLMs are pieces of code that tell machines how to respond to the prompt, enabled by their training on huge amounts of data.
In the case of off-the-shelf tools, data resides in the service provider’s servers—OpenAI’s in the instance of ChatGPT. As a part of the provider’s databases, users have no control over the data they provide to the tool, which can cause great dangers, like sensitive data leakage.
How the service provider treats user data depends on each platform’s end-user license agreement (EULA). Different platforms have different EULAs, and the same platform typically has different ones for its free and premium services. Even the same service may change its terms and conditions as the tool develops. Many platforms have already changed their legal bindings over their short existence.
In-house tools: The second way is to build a private in-house tool, usually by directly deploying one of the LLMs on private servers or less commonly by building an LLM from scratch.
Within this structure, data resides in the organization’s private servers, whether they are on-premises or on the cloud. This means that the business can have far more control over the data processed by its generative AI tool.
Ensuring the security of off-the-shelf tools
Ready-made tools exempt users from the high cost of technology and talent needed to develop their own or outsource the task to a third party. That is why many organizations have no alternative but to use what is on the market, like ChatGPT. The risks of using off-the-shelf generative AI tools can be mitigated by doing the following:
Review the EULAs. In this case, it is crucial to not engage with these tools haphazardly. First, organizations should survey the available options and consider the EULAs of the ones of interest, in addition to their cost and use cases. This includes keeping an eye on the EULAs even after adoption as they are subject to change.
Establish internal policies. When a tool is picked for adoption, businesses need to formulate their own policies on how and when their employees may use it. This includes what sort of tasks can be entrusted to AI and what information or data can be fed into the service provider’s algorithms.
As a rule of thumb, it is advisable not to throw sensitive data and information into others’ servers. Still, it is up to each organization to settle on what constitutes “sensitive data” and what level of risk it is willing to tolerate that can be weighed out by the benefits of the tool adoption.
Ensuring the security of in-house tools
The big corporations that banned the use of third-party services ended up developing their internal generative AI tools instead and incorporated them into their operations. In addition to the significant security advantages, developing in-house tools allows for their fine-tuning and orienting to be domain and task-specific, not to mention gaining full control over their interface user experience.
Check the technical specifications. Developing in-house tools, however, does not absolve organizations from security obligations. Typically, internal tools are built on top of an LLM that is developed by a tech corporation, like Meta AI’s LLaMa, Google’s BERT, or Hugging Face’s BLOOM. Such major models, especially open-source ones, are developed with high-level security and privacy measures, but each has its limitations and strengths.
Therefore, it would still be crucial to first review the adopted model’s technical guide and understand how it works, which would not only lead to better security but also a more accurate estimation of technical requirements.
Initiate a trial period. Even in the case of building the LLM from scratch, and in all cases of AI tool development, it is imperative to test the tool and enhance it both during and after development to ensure safe operation before being rolled out. This includes fortifying the tool against prompt injections, which can be used to manipulate the tool to perform damaging cyber-attacks that include leaking sensitive data even if they reside in internal servers.
Parting words: be wary of hype
While on the surface, the hype surrounding generative AI offers vast possibilities, lurking in the depths of its promise are significant security risks that must not be overlooked. In the case of using ready-made tools, rigorous policies should be formulated to ensure safe usage. And in the case of in-house tool deployment, safety measures must be incorporated into the process to prevent manipulation and misuse. In both cases, the promises of technology must not blind companies to the very real threat to their sensitive and private information.