It is crucial for data and analytics leaders to stay updated on the latest advancements in AI-enabled natural language query and chatbot technology. Otherwise, they risk falling behind, violating data and analytics policies and allowing content to proliferate, explain Mike Fang and Bill Finnerty at Gartner.
Analytics and artificial intelligence intersect at an ever-growing number of points across the enterprise. To take advantage of new opportunities and minimise potential risks, data and analytics leaders must consider the effects of AI on analytics, data science ecosystems, user behaviour, roles and decision-making.
According to Mike Fang, Senior Director Analyst at Gartner, spreadsheets remain the primary tool for data analysis due to their simplicity and widespread use. The popularity of web and app-based stand-alone GenAI chatbots allows users to easily and intuitively analyse spreadsheet data for basic tasks.
This bridges the gap between traditional data entry and sophisticated analysis without requiring specialised analytics and business intelligence (ABI) or data science and machine learning (DSML) software, training, or cumbersome licence procurement.
Users can now analyse data within their business processes without the limitations of traditional analytics software, and they are doing so in large numbers. The rapid adoption of these capabilities is driving a rise in data and analytics work carried out outside of ABI platforms, analytics sandboxes or security policies.
As a result, good governance is also being bypassed, whether intentionally or unintentionally. Gartner predicts that by 2025, 40% of ABI platform users will have circumvented governance processes by sharing analytic content created from spreadsheets loaded into a generative AI-enabled chatbot.
Spreadsheets, often referred to as the cockroach of analytics tools, are perennial survivors of market disruption, and they spread when the right conditions arise. With the ability to analyse spreadsheets directly through GenAI chatbots, the use of spreadmarts is expected to grow.
This signals a need for closer collaboration between data analysts and IT departments to manage and govern the proliferation of these generative data silos.
Gartner predicts that by 2026, more than 70% of independent software vendors (ISVs) will have embedded GenAI capabilities in their enterprise applications, a major increase from fewer than 1% today.
The convenience of AI-enabled natural language query (NLQ) without an ABI platform poses a displacement risk for traditional vendors and for the investments data and analytics leaders have already made. Analytics consumers working in this way will reduce their reliance on complex, well-governed analytics software.
The new role of AI in analytics requires data and analytics leaders to think about their data and analytics ecosystems beyond ABI platforms. They must take the following recommendations into consideration to adapt to the evolving landscape.
Focus on AI training
Training modules should be developed for both business analysts and augmented analytics consumers so that they can fully realise the benefits of GenAI and use these tools for data analysis securely and effectively.
Employ strategic planning
Leaders in analytics and business intelligence must incorporate the use of NLQ chatbots outside of ABI platforms as a technological catalyst into their strategy and operating model. This will be a crucial component of future data analytics workflows.
Integration efforts promote composability
ABI platforms must pursue integration with large language models (LLMs) to remain relevant in a market where users increasingly prefer analytics embedded within their natural workflows. Vendors must describe how their platforms integrate an LLM for data retrieval and prompt engineering, while buyers should assess what is available as a plug-in to a third-party application, such as ChatGPT.
Promote collective intelligence
Initiatives should be in place to encourage the sharing of analytics insights generated from GenAI chatbots, fostering a culture of collaboration and shared learning. Organisations must also establish adaptive governance mechanisms to mitigate hallucinations from AI chatbots and improve interpretability.
It is crucial for data and analytics leaders and their organisations to stay updated on the latest advancements in AI-enabled NLQ and chatbot technology. Otherwise, they may fall behind and face potential violations of data and analytics governance policies, as well as proliferation of content due to the constantly evolving analytics technology and digital landscape.
Ethics framework for predictive analytics
Automation is crucial for justice and public safety organisations as they navigate changing public expectations and cope with diminishing talent pools in many regions. CIOs must prioritise the ethical expansion of data and technology usage to increase productivity and guide their organisations towards achieving mission objectives.
According to Bill Finnerty, VP Analyst at Gartner, by 2026, over 65% of public safety organisations will establish an ethics framework to guide the use of predictive analytics for proactive incident response.
In recent years, advanced predictive analytics has been successfully used in different industries, such as healthcare and public safety. For instance, the Dubai Police have adopted predictive policing software called Crime Prediction.
This software was created to support the UAE’s Smart Governance Initiative and is specifically designed to complement the modernised approach of the Dubai Police force in preventing crime and ensuring public safety.
However, in the public sector, there has been hesitation and concern surrounding the use of predictive analytics. The public’s fears about potential overreach by law enforcement are heightened by the lack of transparency from public safety organisations regarding their utilisation of data and predictive analytics to enhance their mission objectives.
The growing opportunities to use AI for surveillance and advanced analysis present new risks to the ethical and proper handling of data by public safety organisations. Additionally, the absence of national or international standards leaves both these organisations and communities in a constant state of uncertainty regarding the appropriate use of data and technology in predictive and proactive service models.
Until ethics frameworks are established and normalised, government decision makers struggling to determine the appropriate use of predictive service models will err on the side of caution in evaluating related solutions and miss opportunities to understand the potential mission impact.
CIOs leading public safety and justice organisations must engage leadership or the governance board to establish a working group to develop an ethics framework for the use of predictive analytics in operational areas of the department, including external parties such as community leaders, academics and industry professionals.
Government CIOs must also enforce a policy of transparency regarding any analytics or artificial intelligence models used for operational purposes. This can be achieved by requiring vendors to disclose the models they use in their solutions and making this information available to the public upon request.
It is essential to establish a process that allows the community to verify the use of predictive analytics and ensure adherence to department policies, in order to build trust in their implementation.