Every executive I have spoken with during the past year is scrambling to place their AI bets, trying to separate the hype from what is real today and what the future will bring. The recent flood of popular-press musings on OpenAI, and on whether one is an AI “Doomer” or an “Accelerationist,” does not help.
While the commercialization of generative AI seems entirely new, exotic, and different from anything that came before, in many ways how prior technology hype cycles unfolded can offer insight into how this latest AI wave may unfold as well, and, more importantly, can provide principles for navigating it successfully. This article offers insights and recommendations to give executives a firm footing to evaluate opportunities and tradeoffs amid this hype cycle.
For context, OpenAI’s mission “to ensure that artificial general intelligence benefits all of humanity” does not easily comport with a $13 billion investment from Microsoft aimed at dominating the enterprise AI landscape, in the midst of what is likely the largest tech hype cycle the industry has seen. Relying on OpenAI, Microsoft, Google, and other vendors to answer these questions for you will yield an interesting, entertaining perspective, but one that is not likely grounded in reality.
I don’t know which hallucinations are stronger: those that generative AI systems deliver today or those propagated by some vendors. Gartner, in its “Hype Cycle for Emerging Technologies, 2023,” agrees: generative AI sits at the peak of the hype cycle and is early in its commercial maturity and viability. Let's dig in.
What’s past is prologue
Neural network-based software like ChatGPT, combined with specialized AI hardware (e.g., Nvidia GPUs), may seem like an entirely new development, yet the foundations of these technologies, and the way tech hype cycles work, are not new. My observations and recommendations are based on decades in tech, including early neural network research, building and bringing to market expert and NLP systems, and developing commercial enterprise systems at every level of the enterprise tech stack, including AI/ML use cases across the enterprise (more of my background can be found here and here).
Having had a front-row seat (and some “game time”) during several technology cycles, I see emerging patterns, and offer the following recommendations, for navigating today's shark-infested AI waters.
Recommendations from lessons learned
Avoid vendor lock-in; play the vendor field to your advantage
Get your data (ware) house in order
Create a cross-organizational “AI SWAT Team” of leaders and practitioners with a track record of achieving outcomes to identify AI/ML areas of opportunity
Investigate specific, focused use cases for ML today
Use an agile product framework. Fail fast and move on, with lessons learned.
Get your shovel and let’s dig deeper.
1. Avoid vendor lock-in; play the vendor field to your advantage
Your enterprise data forms the foundation for effective ML. LLM search is not a “Get Out of Jail Free” card that will magically overcome bad data. Bad, incomplete, dirty, and duplicated data will skew the results and limit the effectiveness of any application of AI.
Strategically, your enterprise data represents a massive competitive “land grab” for every enterprise and AI platform, application, and tools vendor. Make sure your strategy and vendor agreements ensure you control your data and your destiny for the short, medium, and long term. Avoid technology and contractual lock-in unless “the deal” is so incredibly good that it is worth it, and even then engineer a “rip cord” for your emergency chute so you can pivot to other vendors.
2. Get your data (ware) house in order
Digging into the data dirt: during the past few years, enterprises have made a good dent in busting through application and data silos, integrating departmental and cross-organizational applications into data warehouses, operational data stores, data lakes, and the like. By building very specific queries, we can often get at the data we seek for decision support and operational systems. Still, most enterprises are muddling through applications designed and built for a single function or department, with cross-organizational data put to use only after the fact. Rare is the enterprise that starts by designing wing-to-wing cross-organizational processes upfront as a framework for wing-to-wing automation within and between applications.
This typically results in lots of duplicated, overlapping data, because it has been easier to recreate data from one application or data store than to invest the time and energy to design, implement, and establish the right data integration pattern upfront. Dirty, duplicated data (data designed to serve a single department or function) skews the accuracy and limits the usefulness of ML automation and analysis. Incomplete data within departmental applications often leaves gaps that preclude balanced, relevant regression analysis on an enterprise-wide basis.
I recommend biting the bullet and establishing data governance oversight immediately. Data governance (or any governance) does not have to be a big-bang, “all or nothing” undertaking. Make the case at the leadership level to invest now, or suffer competitive disadvantage as ML and generative AI mature. Bake it into your SDLC, and incentivize and gamify it with your product and IT teams and your business stakeholders. Make quality data a must-have check for your entire product team, including UX. Communicate data governance as a journey with milestones along the way.
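To make “quality data as a must-have check” concrete, here is a minimal sketch of a data quality gate that could run in your SDLC pipeline. It assumes a pandas DataFrame of customer records; the file name, column names, and thresholds are hypothetical placeholders you would tune to your own schema.

```python
import pandas as pd

# Hypothetical thresholds and column names; tune these to your own schema.
MAX_DUPLICATE_RATE = 0.01   # fail the build if >1% of records are duplicates
MAX_NULL_RATE = 0.05        # fail if >5% of any required field is missing
REQUIRED_FIELDS = ["customer_id", "email", "region", "last_order_date"]

def data_quality_gate(df: pd.DataFrame) -> list[str]:
    """Return a list of violations; an empty list means the gate passes."""
    violations = []

    # Duplicate check: the same customer showing up in multiple rows.
    dup_rate = df.duplicated(subset=["customer_id"]).mean()
    if dup_rate > MAX_DUPLICATE_RATE:
        violations.append(f"duplicate rate {dup_rate:.2%} exceeds {MAX_DUPLICATE_RATE:.0%}")

    # Completeness check: required fields must be mostly populated.
    for field in REQUIRED_FIELDS:
        null_rate = df[field].isna().mean()
        if null_rate > MAX_NULL_RATE:
            violations.append(f"{field} is {null_rate:.2%} empty, exceeds {MAX_NULL_RATE:.0%}")

    return violations

if __name__ == "__main__":
    customers = pd.read_csv("customers.csv")  # placeholder extract
    problems = data_quality_gate(customers)
    if problems:
        raise SystemExit("Data quality gate failed:\n" + "\n".join(problems))
```

A gate like this is deliberately dumb and cheap; the point is that it runs on every release, so data debt surfaces as a build failure rather than as a skewed model months later.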
To reinforce the wing-to-wing process engineering mindset, communicate a customer-first and user-first mindset to your team. When employees think of customers and users first, they engineer with them in mind, for them. Incentivize, reward, and call out customer-centric design, implementation, and maintenance, not just for your front-office team members (Sales, Support), but for the people building the systems.
3. Create a cross-organizational “AI SWAT Team”
AI/ML can be done on a siloed, departmental basis, but the most valuable returns are cross-functional and cross-organizational. For example, NPS is a reasonable measurement of CSAT, but it does not necessarily give you an accurate Lifetime Customer Value calculation, one that includes a prediction based on a roll-up of what the customer has purchased and their engagement pattern at an individual and aggregated customer level. ML-based predictions that evaluate data cross-departmentally can yield the most valuable insights.
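As an illustration of that cross-departmental roll-up, here is a sketch of a Lifetime Customer Value model that joins purchase history from sales with engagement signals from support and marketing. The file names, column names, and the 12-month revenue label are hypothetical stand-ins for your own extracts.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical extracts, each from a different departmental system.
orders = pd.read_csv("orders.csv")        # sales: customer_id, order_total
tickets = pd.read_csv("tickets.csv")      # support: customer_id, ticket_id
touches = pd.read_csv("engagement.csv")   # marketing: customer_id, clicks

# Roll features up to one row per customer, across departments.
features = (
    orders.groupby("customer_id")["order_total"].agg(["sum", "count"])
    .join(tickets.groupby("customer_id").size().rename("ticket_count"))
    .join(touches.groupby("customer_id")["clicks"].sum())
    .fillna(0)
)

# Label: observed 12-month revenue as a stand-in for lifetime value.
labels = pd.read_csv("ltv_labels.csv").set_index("customer_id")["revenue_12m"]
X = features.reindex(labels.index).fillna(0)

X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"Holdout R^2: {model.score(X_test, y_test):.2f}")
```

Notice that the model is the easy part; the join is where siloed, dirty, or duplicated data quietly destroys the prediction, which is why the data governance work above comes first.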
Bring together the best contributors within the enterprise to place bets, make decisions, and experiment. Team members should be included from multiple parts of the organization, including:
Functional domain leaders (sales, marketing, service)
LOB stakeholders
Operational leaders who know first-hand the experience of workers, users
End-users of the systems
Product team members
IT
Legal/Compliance
This is a cross-functional undertaking, so be wary of individuals who try to “own the agenda.” The spirit and practice of collaboration should take precedence. Executive leadership should message and incentivize the right behaviors; blockers, agendas, and egos should be managed.
4. Investigate specific, focused use cases for ML today
The tools or platform you prototype on do not have to be the ones you ultimately choose for production deployment. How ML is applied is evolving so quickly that testing and POC validation before choosing your go-to-market platform may serve you better.
Charge your team with an open-minded, brainstorming use case process. The process can be fun, with rapid, rewarding outcomes that also identify your talent while building cross-organizational collaboration muscle.
Rather than pursuing high-risk/high-reward moonshots or a bet-the-company set of highly visible use cases (e.g., ones your customers see directly), consider identifying predictive use cases that can be embedded within your current automation. For example, consider using AI to route customers to agents rather than launching a riskier LLM support channel. If you do pursue a generative AI use case, put human guardrails in place. Rather than simply “turning on” AI-generated emails, drawn from customer and case records, that sales reps and agents can fire off directly to customers, have an analyst identify the ten most common customer interaction themes (a la templates) and allow a select group of power users to test those templates for accuracy, consistency, and value.
Another good idea is to set up your POC so that a generated email is first routed to an analyst or other subject matter expert for approval before the rep or agent can send it. Small, tight experiments can yield hits and misses quickly and with low risk.
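Here is a minimal sketch of that approval gate as a simple state machine, independent of any particular vendor's API; the type and function names are illustrative.

```python
from dataclasses import dataclass
from enum import Enum

class DraftState(Enum):
    PENDING_REVIEW = "pending_review"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class EmailDraft:
    customer_id: str
    body: str                      # text produced by the generative model
    state: DraftState = DraftState.PENDING_REVIEW
    reviewer_notes: str = ""

def review(draft: EmailDraft, approve: bool, notes: str = "") -> EmailDraft:
    """A subject matter expert approves or rejects the generated draft."""
    draft.state = DraftState.APPROVED if approve else DraftState.REJECTED
    draft.reviewer_notes = notes
    return draft

def send(draft: EmailDraft) -> None:
    # The send path refuses anything a human has not explicitly approved.
    if draft.state is not DraftState.APPROVED:
        raise PermissionError("Draft has not been approved by a reviewer.")
    print(f"Sending to {draft.customer_id}: {draft.body[:60]}...")
```

The design choice that matters is that the send path itself enforces the guardrail; approval is not a convention the rep can skip, it is a precondition the system checks.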
Once you have narrowed down your list of target use cases, apply rigor to quickly chart them in a simple matrix, documenting the following criteria (a scoring sketch follows the list).
Business function (Sales, Service, Marketing, RevOps)
User persona(s) impacted
Cost breakdown of each manual or currently automated approach
Cost of hand-offs between people, systems, processes
What are current and alternative approaches?
What is the compliance and legal cost of getting it wrong?
What is the revenue, cost, and market mover upside?
What is Time-To-Market and Time-To-Value?
What is the operational, reputational, and opportunity cost risk if the selected use case does not pan out?
How can we quickly test the business value hypothesis?
What are the vendor options and trade-offs? How can we leverage vendors to “prove” the value? What skin will they put in the game?
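One lightweight way to turn that matrix into a ranking is a weighted score per use case. The criteria weights and scores below are illustrative placeholders, not a prescription; the point is to force explicit, comparable trade-offs.

```python
# Score each candidate use case 1-5 against the criteria above, weight the
# criteria by what matters to your business, and rank. All numbers here are
# illustrative placeholders.
WEIGHTS = {
    "revenue_upside": 0.25,
    "cost_savings": 0.20,
    "time_to_value": 0.20,
    "compliance_risk": -0.20,   # negative: higher risk lowers the score
    "reputational_risk": -0.15,
}

use_cases = {
    "route_customers_to_agents": {
        "revenue_upside": 3, "cost_savings": 4, "time_to_value": 5,
        "compliance_risk": 1, "reputational_risk": 1,
    },
    "llm_support_channel": {
        "revenue_upside": 4, "cost_savings": 5, "time_to_value": 2,
        "compliance_risk": 4, "reputational_risk": 5,
    },
}

def score(criteria: dict[str, int]) -> float:
    return sum(WEIGHTS[name] * value for name, value in criteria.items())

for name, criteria in sorted(use_cases.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(criteria):+.2f}")
```

Even a toy scorer like this makes the earlier recommendation tangible: the embedded, low-visibility routing use case outranks the flashy customer-facing channel once risk carries real weight.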
No doubt your prioritized use case list will change over time as your business evolves, particularly the quality, accuracy, and completeness of your enterprise data, its governance, process coverage, and integrations, as well as the market maturation of commercialized AI platforms and tools.
5. Apply an agile framework for AI use cases
A measured approach for enterprise customers might favor specific, surgical applications of AI/ML, building a POC on one or more AI tools to determine what does (and doesn't) work. The technology is evolving so quickly that experimenting before deciding which platform to ultimately bet your business on can work in your favor.
Vendors do what vendors have always done: they hype and flog all the sexy “magic” of the technology, in this case generative AI, pushing enterprise customers to buy a big helping of vendor X's or Y's flavor of AI. “Vision” is an important driver when selecting and spending with a vendor, but the difference between vision and hype is an important, nuanced one.
Actionable plans can be distilled down fairly quickly with a series of focused, high-impact workshops. Motivate, incentivize. Wash, rinse, repeat, and don’t drink all the vendor kool-aid in one sitting.
Let us help you move faster with confidence; drop us a line.