Data — the right data — created and shared with the right network can add security, growth and alignment across departments while streamlining supply chain tiers. But recent global conversations about embedding artificial intelligence (AI) in businesses are giving rise to "what-ifs" and "what could bes" to come.
It’s a Hollywood-based drama, literally. The recent Writers Guild of America (WGA) strike calling to “regulate use of material produced using artificial intelligence or similar technologies” represents fears brewing in multiple professions as the data integrity debate rages on.
Battle Lines Have Been Drawn
The deployment of AI, slow in the making yet seemingly rapid in arrival, is generating a seesaw of opposing intellectual stances. Members of the WGA, fighting producers, networks and studios that want to bring AI into the pre-production fold, stand firm on preserving the authentic craft of human writing. Their concerns extend beyond the obvious risk generative AI poses to their jobs to how it will shape the content and final product, leading to lower-quality, less revered works.
For years, digital networks and their algorithms dictated which content reigned supreme. The phrase “content is king” was the backdrop to website and business success or failure. Now, “fictitious facts” are blurring data integrity lines, forcing businesses and consumers to question what is real, or whether it even matters. The outcry from the Hollywood elite may foreshadow a supply chain story to come.
Respecting the Unknowns
Much like the coronavirus pandemic’s disruptions to worldwide trade, socialization, economic health and the environment, we only learn what we know as events unfold. Given AI’s unknowns, its integration into organizations can create open-ended, exponential risks in perpetuity, according to technology’s most innovative thought leaders and digital inventors.
Concerns from 1,500 technology leaders were shared in a letter published by the Future of Life Institute protesting the continuous, reckless and rapid advancement of AI development and deployment. Leaders, including Elon Musk, stated that when technology is managed without “commensurate care and resources,” its accuracy, safety, transparency, trustworthiness and loyalty are in jeopardy.
The letter also advised innovators to step back from “the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.” It further urged AI developers to “work with policymakers” and hasten governance systems development.
The European Union (EU) responded. The Associated Press reported that Meta, TikTok, Google, Microsoft and other technology giants already committed to culling disinformation on their platforms were implored to do more by Vera Jourova, European Commission vice president. She suggested creating an “AI warning label” to provide a new level of protection from untruthful AI-generated content.
United Nations (U.N.) Secretary-General Antonio Guterres plans to present a new code of conduct putting guidelines and regulations in place to support information integrity across digital platforms. New advisory boards and an artificial intelligence agency will develop the code’s tenets, to be used by governments, tech companies and advertisers to promote data truth and integrity.
While much of the business focus is on everything new in data, a prudent, near-term path to greater data integrity may be found in the data already in place.
Clean and Lean
Less is more may be sage advice when weighing whether to integrate generative AI in the near future, says author and The Classification Guru owner Susan Walsh. Most people focus on adding data, with little to no time dedicated to reviewing what is already there, she says; over time, that breeds misinformation, duplication, bad links, out-of-date content and context, and contact or accounting inaccuracies.
“Just like anything else, when no one’s minding the store, trouble and disarray show up,” Walsh says. For companies with a digital footprint, large or small, global or local, “data disarray dirties all the good input that’s been done, and AI is another vehicle bringing more data to your platforms, faster, adding to the mess,” she says.
She advises businesses to clean and scrub their data. Walsh developed the COAT system to keep data consistent, organized, accurate and trustworthy. Visualize a coat covering the data, a protective layer against exposure to unwanted elements — many of them human-derived.
By identifying areas of duplication or misclassification, for example, the data can be consolidated under single formats, naming conventions, stock-keeping units (SKUs) and more. “Imagine how much time is wasted and the amount of frustration put on people when the same data is classified in multiple ways,” she says. Clean data is the optimal starting point for successful AI deployment.
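A cleanup pass of this kind can be sketched in a few lines. The snippet below is an illustrative example in the spirit of Walsh’s COAT idea, not her actual method: the field names, records and normalization rules are hypothetical, chosen only to show how differing naming conventions for the same SKU can be collapsed into one.

```python
# Illustrative "clean and scrub" pass: normalize SKU naming conventions,
# then drop duplicate records. All data here is hypothetical.

def normalize_sku(raw: str) -> str:
    """Collapse case, whitespace and separator differences into one convention."""
    return raw.strip().upper().replace(" ", "-").replace("_", "-")

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep one record per normalized SKU."""
    seen: dict[str, dict] = {}
    for rec in records:
        key = normalize_sku(rec["sku"])
        if key not in seen:  # first occurrence wins; later duplicates are dropped
            seen[key] = {**rec, "sku": key}
    return list(seen.values())

catalog = [
    {"sku": "ab-100", "name": "Widget"},
    {"sku": "AB_100", "name": "Widget"},   # same item, different convention
    {"sku": "CD-200", "name": "Gadget"},
]
clean = deduplicate(catalog)
print([r["sku"] for r in clean])  # two entries remain: AB-100 and CD-200
```

In a real catalog the normalization rules would come from the business’s own naming conventions, and flagged duplicates would typically be routed for human review rather than silently discarded.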
Data as a Product
As a broad definition, content and coding are data with varying complexities and applications. When considering data as a product, opportunities expand in its use, importance and marketability.
The term “data mesh” refers to an architectural pattern for enterprise-wide data platforms that brings together data domains, data products, self-serve platforms and federated governance. The approach provides scalable analytics beyond single platforms, a benefit when integrating AI.
This approach helps manufacturing companies break down data silos and align them, foundational to enhancing cohesiveness and workability across supply chains. Coherent data systems support collaboration, a gateway to digital transformation.
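The core data mesh ideas mentioned above — domain ownership, data as a product, and self-serve discovery — can be sketched in miniature. This is a toy illustration under assumed names (`DataProduct`, `MeshRegistry`, the “logistics” domain), not a real data mesh implementation, which would span storage, governance and access-control infrastructure.

```python
# Toy sketch of data mesh concepts: each domain publishes its data as a
# product with a declared schema, discoverable through a shared registry.
# All names and records here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class DataProduct:
    domain: str            # owning business domain, e.g. "logistics"
    name: str              # product identifier within the domain
    schema: dict           # published contract consumers can rely on
    records: list = field(default_factory=list)

class MeshRegistry:
    """Self-serve catalog: any team can discover and read published products."""
    def __init__(self):
        self._products: dict[str, DataProduct] = {}

    def publish(self, product: DataProduct) -> None:
        self._products[f"{product.domain}.{product.name}"] = product

    def get(self, qualified_name: str) -> DataProduct:
        return self._products[qualified_name]

registry = MeshRegistry()
registry.publish(DataProduct(
    domain="logistics",
    name="shipments",
    schema={"sku": "str", "eta_days": "int"},
    records=[{"sku": "AB-100", "eta_days": 4}],
))

# Another domain consumes the product without a central data team in the loop:
shipments = registry.get("logistics.shipments")
print(shipments.records[0]["eta_days"])  # 4
```

The point of the sketch is the ownership boundary: the logistics domain owns and publishes its product, while consumers depend only on the declared schema, which is what makes the pattern scale across silos.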
Data Integrity Is Human
Supply chains in search of data integrity may be standing between a rock (data) and a hard place (AI). At its core, AI is a model that learns from the data humans input, currently a cause with an effect to be determined.