The relentless growth of Software-as-a-Service (SaaS) has been one of the defining success stories of the past decade. SaaS is now the default model for new software products, and many are seeking to replicate its success in other industries. We are in a world where almost anything can be obtained “as-a-Service”.
But SaaS disruption does not stand still. The innovation focus for the cloud software pioneers, as well as those that followed in their footsteps, has shifted to a new realm: Artificial Intelligence (AI).
This summer, Panintelligence surveyed 55 leading SaaS companies on their use of AI, and how it fits into their innovation and investment plans. We discovered that three-quarters (76%) of SaaS companies were already using or testing AI in their businesses; two-thirds (67%) have already added AI capabilities to their products; and another 23% are considering use cases.
Machine learning algorithms are the most common AI technology used by SaaS vendors today. Almost half (43%) have introduced machine learning into their products, and another 15% into back-office operations.
But the single biggest source of AI innovation in SaaS today is Generative AI. More than a third (38%) of vendors have rolled out Generative AI capable of generating text, images or other media within their products. All of these rollouts took place in the last 12 months.
And another 15% of SaaS vendors are testing new Generative AI capabilities.
Almost all SaaS leaders we spoke to said that their innovation efforts aimed to improve customer satisfaction and loyalty, differentiate their offerings, meet demand for new functionality, and create new features for upselling opportunities. These were objectives for at least 90% of those we surveyed.
The primary driver of AI innovation directives? Company boards and investors. There is a palpable fear of missing out on the transformative potential of AI.
Data quality is non-negotiable
SaaS vendors are well positioned to bring transformative AI capabilities into their platforms, benefiting from the ability to cultivate and refine models using the rich data resources derived from their user base. In doing so, they provide the clearest path possible to make AI accessible to the millions that use their platforms on a daily basis.
However, while almost all (94%) SaaS vendors have made data security and privacy a strategic focus and continue to pour significant resources into maximising the resilience of their platforms and data assets, data quality remains a second-class citizen. There are multiple data quality issues affecting the rollout of AI in SaaS, and how we address them today will have a significant impact on the industry’s future.
As long ago as 2018, Gartner predicted that 85% of AI projects could yield erroneous outcomes due to data bias, algorithmic issues or inadequately skilled teams. Our research suggests many vendors have yet to fully address these critical challenges.
Data quality issues can take many forms and lead to flawed analyses and predictions. Missing values or errors in data can hinder the performance of AI models and reduce the reliability of insights. Inconsistent data formats, units, or naming conventions can create confusion and lead to errors.
Duplicated data can skew analyses. And bias in data, however unintentional, can result in discriminatory outcomes and unfair decisions in areas like hiring, lending, or recommendation systems.
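The issue types above are straightforward to surface with automated checks. As a minimal sketch (using pandas, with a made-up customer dataset purely for illustration), the same three problems can be detected in a few lines:

```python
import pandas as pd

# Hypothetical customer records exhibiting common quality problems:
# missing values, an exact duplicate row, and inconsistent naming.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 4],
    "country":     ["UK", "uk", "uk", None, "United Kingdom"],
    "mrr":         [100.0, 250.0, 250.0, None, 80.0],
})

# 1. Missing values: nulls that hinder model training and reduce reliability.
missing_per_column = df.isna().sum()

# 2. Duplicated records: exact copies that would skew any analysis.
duplicate_rows = int(df.duplicated().sum())

# 3. Inconsistent naming conventions: the same country spelled three ways.
distinct_countries = df["country"].dropna().str.strip().str.lower().nunique()

print(missing_per_column.to_dict())  # {'customer_id': 0, 'country': 1, 'mrr': 1}
print(duplicate_rows)                # 1
print(distinct_countries)            # 2 ('uk' and 'united kingdom')
```

Checks like these are cheap to run on every data ingest, which is exactly the kind of routine gate that stops flawed records from reaching model training.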
Our research found that more than a third (37%) of SaaS vendors believe quality issues, stemming from a lack of relevant and reliable data, remain a barrier to the adoption of AI. And just 28% – a third of those developing AI functionality – are working on the kind of data quality initiatives required to support highly robust and accurate AI models.
A lack of relevant and reliable data poses significant challenges when it comes to AI adoption in SaaS. Vendors are at the forefront of adopting AI and will be among the first to feel the impact of AI failures.
AI regulation: a significant barrier for the SaaS industry
With up to two-thirds of SaaS companies training their models on data that could compromise prediction accuracy and create unfair or discriminatory outcomes, the threat of regulation looms even larger.
Data quality issues undermine the effectiveness of AI and present significant hurdles to complying with evolving regulations. And over half (52%) of companies say regulation is a major barrier to AI adoption, reflecting the current uncertainty around legal frameworks for AI.
Policymakers across major jurisdictions are harmonising their directives, emphasising the imperative for AI systems to avoid causing harm, uphold privacy standards, and eliminate discrimination. This presents a substantial and intricate challenge that SaaS companies must tackle proactively.
Those who have yet to prioritise data quality could face significant risks from training AI systems on data that compromises prediction accuracy and engenders unjust or biased outcomes. The aftermath could be costly, including the extensive undertaking of retrospectively cleaning and reprocessing data.
Keeping a human in the loop on the journey to AI
SaaS companies must prioritise data quality, transparency, and regulatory compliance to fully realise the potential of AI in their products. They need to implement robust data quality management practices, use new tools to fully understand how their models work, and establish clear data governance frameworks.
Without checks and processes to ensure data accuracy, issues can propagate through the system. Some industry estimates put the cost of bad data at between 15% and 25% of revenue for most companies, and that was before the rapid adoption of AI. Training AI models that automate decisions, predictions or recommendations on flawed data can only magnify this negative impact and cost.
Historically, humans have provided a counterbalance to data quality issues. There are many scenarios where a skilled data scientist or subject matter expert might look at a dashboard and see, based on experience, that something is wrong. We must keep a human in the loop, and ensure that they can inform, understand and explain how AI models think and work.
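One common way to keep that human in the loop is to route only low-confidence model outputs to a reviewer, rather than acting on every prediction automatically. The sketch below is illustrative only – the threshold value and function names are assumptions, not a prescribed implementation:

```python
# Minimal human-in-the-loop gating sketch (illustrative names and threshold).
# Predictions below a confidence cut-off are flagged for human review
# instead of being actioned automatically.

REVIEW_THRESHOLD = 0.85  # assumed cut-off; tune per use case and risk appetite


def route_prediction(confidence: float) -> str:
    """Return 'auto' for high-confidence outputs, 'human_review' otherwise."""
    return "auto" if confidence >= REVIEW_THRESHOLD else "human_review"


predictions = [
    ("approve_loan", 0.97),  # confident: safe to automate
    ("reject_loan", 0.62),   # uncertain: a human should check for bias
]
for label, confidence in predictions:
    print(label, "->", route_prediction(confidence))
```

The design choice here is that the threshold encodes the organisation's risk appetite: lowering it automates more decisions, raising it sends more to the expert who can spot, from experience, that something is wrong.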
In this context, Causal AI will be an increasingly valuable tool for vendors, enabling them to assess the quality of models and data (proactively and retrospectively) while identifying and mitigating any biases at play. This will be a vital weapon in the fight to get this right, particularly in light of the growing demand for transparency and the ability to explain the inner workings of black-box AI models.
This combination of human and machine will support more effective AI-driven solutions and data-driven decision-making, by ensuring that the data used for AI training and analysis is accurate and reliable, and that the models it informs deliver value and remain compliant with regulations.
The post Data is the foundation of AI, and quality is non-negotiable appeared first on Datafloq.