When Does AI Make Sense?
Every emerging technology is accompanied by CEOs champing at the bit. Artificial Intelligence (AI) is no different. The C-Suite is throwing “AI” into as many conversations as possible to appease shareholders and project the image of a technologically savvy company. While organizations love to embrace new technologies, they must be careful not to adopt a solution in search of a problem. Because AI is a broad term, we are going to focus on one specific subset: large language models (LLMs), i.e. ChatGPT-like applications and models.
When do AI/LLMs NOT make sense?
Charlie Munger popularized the concept of inverting problems as a useful tool for getting to an answer. Using this method, we should consider which characteristics describe problems where LLMs/AI do not make sense as part of the solution.
A Simpler Solution Will Suffice
Any machine learning or AI model requires significantly more resources than a rules-based system. It relies more heavily on good data engineering and management, and it needs monitoring over time because data and model performance are prone to degradation. If you can define your system with a set of rules (e.g. if-then statements), then you likely do not need machine learning.
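To make the contrast concrete, here is a minimal sketch of what a rules-based system looks like in practice. The scenario (flagging suspicious transactions) and every threshold are hypothetical, chosen only for illustration:

```python
# A hypothetical example: flagging suspicious transactions with plain
# if-then rules. If your logic fits comfortably in a function like this,
# you likely do not need machine learning -- no training data, no model
# monitoring, no degradation over time.
def flag_transaction(amount: float, country: str, hour: int) -> bool:
    """Return True if the transaction should be flagged for review."""
    if amount > 10_000:  # unusually large transfer
        return True
    if country not in {"US", "CA"} and amount > 1_000:  # large foreign transfer
        return True
    if hour < 5 and amount > 5_000:  # large transfer in the middle of the night
        return True
    return False

print(flag_transaction(15_000, "US", 12))  # True: exceeds amount threshold
print(flag_transaction(500, "FR", 12))     # False: small foreign transfer
```

A system like this is fully auditable, costs nothing to retrain, and fails in predictable ways, which is exactly what you give up when you reach for a model.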
Structured Data
Structured data is data that fits nicely into a tabular format. Common examples include Excel files and relational databases (e.g. Oracle, Teradata, PostgreSQL, etc.). If the vast majority of your data is in a structured format, then there are likely simpler solutions than an LLM. There are numerous models with proven success stories that come with less risk, including linear regression, time series models, and tree-based models (e.g. XGBoost).
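For tabular data, even a one-variable linear regression can go a long way. A minimal sketch (pure standard library, closed-form least squares; the ad-spend numbers are made up for illustration):

```python
# Fitting a one-variable linear regression on tabular data with the
# closed-form least-squares solution -- often a better first attempt
# than an LLM when your data is already structured.
def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# e.g. ad spend (in $k) vs. units sold -- illustrative numbers only
spend = [1.0, 2.0, 3.0, 4.0, 5.0]
units = [12.0, 14.5, 16.0, 18.5, 20.0]
slope, intercept = fit_line(spend, units)
print(f"units ≈ {slope:.2f} * spend + {intercept:.2f}")
```

A model like this is cheap to fit, trivially explainable, and easy to validate, three properties an LLM does not offer for this kind of problem.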
Someone Else Will Build It
Investing in projects that incorporate AI models requires long-term planning. These projects can take months or even years, so the solution needs to account for technological advancements over that horizon. Using LLMs to summarize Outlook emails, Zoom meeting transcripts, or Salesforce notes is likely a waste of time because Microsoft, Zoom, and Salesforce will likely integrate LLM-based features into their own products.
Explainability > Predictability
Imagine you work for Coca-Cola and your boss asks, “Why do you think soda sales will increase next year?” They are unlikely to accept the explanation that “AI predicts that sales will increase due to patterns in the data.” Black-box models (e.g. neural networks) can be more predictive than explainable models (e.g. linear regression), but you need sound reasoning to explain the “why” behind decisions. More information does not mean more knowledge.
Accuracy is Paramount
Hallucinations in LLM-derived answers are likely not going away anytime soon. In fact, they might be a feature rather than a bug. Hallucinations are extremely dangerous in fields where false positives or false negatives can be the difference between life and death. I would not want my doctor using ChatGPT for any medical inquiry where a hallucination could have an adverse impact on me.
Your Company’s Technology Adoption is 10 Years Behind
If your organization is still transitioning to the cloud, has poor data quality, and/or struggles with the average digital transformation project, then it is not ready for a significant investment in AI/LLMs. There is likely lower-hanging fruit for organizations that are far behind the technology curve.
When do AI/LLMs make sense?
If you are considering LLMs/AI as the technology to solve a problem, then your problem:
- Cannot be solved by a simpler solution (e.g. rules-based system)
- Will NOT be solved by another company offering a SaaS solution in the near future (e.g. Microsoft, Salesforce)
- Involves a significant amount of unstructured data (e.g. word documents, video files, images) and/or semi-structured data (e.g. JSON files)
- Does not require explainable answers or a “why” for stakeholders
- Does not require extremely high accuracy
- Is a sensible next step for investing in technology (i.e. not jumping straight from Excel analyses to LLM solutions)
After checking off the more obvious criteria above, there are a few more items that solidify LLMs as a tool worth considering. These include:
Data Security/Privacy
In the free version of ChatGPT, the model has the freedom to use any prompt you enter as part of the dataset for retraining. Therefore, any data you enter into ChatGPT is NOT private and might eventually be displayed to another user as part of the response to their prompt. While the paid version of ChatGPT allows users to turn off the model’s ability to learn from your prompts, the risk still seems present. Emerging technologies present unknown unknowns, and you are putting a lot of faith in another organization. If your company’s competitive advantage is its data, then you cannot take the chance that another company could learn from your data — even if it is unintentional.
Proprietary Data
Matthew McConaughey stating that “Data is the new gold” is a bit of hyperbole, but there is some truth to it. Proprietary data can be incredibly valuable in some situations. It isn’t too far-fetched to imagine Bloomberg expanding its information edge with AI models trained on its plethora of data. How about Google or TikTok using their millions of videos to build an application that lets users create AI-generated videos? LLMs open up new revenue-generating opportunities for organizations with valuable proprietary data.
What type of AI investment makes sense?
While machine learning is not a new technology, LLMs have slightly different needs than traditional machine learning systems. For technically savvy organizations, jumping into LLM use cases is a no-brainer. For many other organizations, I would recommend pursuing one of two options:
- Build out the data and machine learning infrastructure to support AI model development. This would involve investment in a data lake, a machine learning platform, and likely a vector database to store word embeddings.
- Focus on an internally-facing 3-6 month pilot project that represents an easy win. The pilot project might not yield an immediate return, but the lessons learned would, at a minimum, feed into a more worthwhile endeavor.
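The first option mentions a vector database for word embeddings. Its core operation is nearest-neighbor search: store each document as a vector and retrieve the most similar ones for a query. A minimal sketch in pure Python (the document names are hypothetical, and real embeddings have hundreds of dimensions rather than the toy 3-dimensional vectors used here):

```python
import math

# Toy "vector store": document id -> embedding. In practice the
# embeddings come from a model; these 3-d vectors are illustrative only.
store = {
    "refund_policy":  [0.9, 0.1, 0.0],
    "shipping_times": [0.1, 0.8, 0.2],
    "return_labels":  [0.7, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query, k=2):
    """Return the k document ids most similar to the query embedding."""
    return sorted(store, key=lambda doc: cosine(query, store[doc]), reverse=True)[:k]

print(nearest([0.8, 0.15, 0.05]))
```

A production vector database (and the machine learning platform around it) does the same thing at scale, with approximate indexes instead of a brute-force sort; the point of the sketch is only what that infrastructure is for.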
While this is an exciting new technology, organizations must remain agnostic when searching for the right tool to solve their problem. I look forward to seeing how this domain evolves over time.
~ The Data Generalist
Data Science Career Advisor