AI Will Not Replace Tech Jobs

In an era teeming with technological advancements, the fears surrounding artificial intelligence (AI) have become dramatized, to say the least. The widespread concern that AI will displace tech workers, among many others, has become the prevailing narrative. Do not let the media’s obsession with doom and gloom get the best of you. Before I dive into why I stand firmly against this narrative, let’s see what tech workers are up against.

The Automation Threat

LLMs like ChatGPT are capable of some incredible things, including text generation, text summarization, suggesting recipes based on a set of ingredients, generating or troubleshooting code, generating comic strips, and more. Talk to programmers, and the majority will make the bold claim that GPT-like models 10x their productivity. I am part of that majority, in complete awe of its capabilities. On a recent project, I walked ChatGPT through a role-based prompt to get the model to output a nearly perfect sequence of Python code I needed to build a network chart for work. This was MUCH quicker than reading the Python package’s documentation and scouring Stack Overflow. If most agree that coding efficiency increases with LLMs, then surely we won’t need as many tech workers, right?
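As a rough illustration (this is not the actual project code, and the node names are made up), the kind of script ChatGPT produced looked something like this, assuming the networkx and matplotlib packages are installed:

```python
# Hypothetical sketch of an LLM-generated network chart (not the original code).
# Assumes the networkx and matplotlib packages are available.
import networkx as nx
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Toy edge list standing in for the real work data
edges = [("Sales", "Finance"), ("Sales", "IT"), ("IT", "Finance"), ("IT", "HR")]

G = nx.Graph()
G.add_edges_from(edges)

pos = nx.spring_layout(G, seed=42)  # fixed seed for a reproducible layout
nx.draw(G, pos, with_labels=True, node_color="lightblue", edge_color="gray")
plt.savefig("network_chart.png")
```

Getting a prompt to this working state took a few iterations, but far fewer minutes than piecing it together from documentation alone.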

Reason #1: Jevons Paradox

Jevons paradox refers to the efficiency dilemma in which improving the efficiency of a resource’s use increases demand for that resource. It is often applied to energy, where more efficient energy usage leads to cheaper energy prices, which in turn increases demand. If programming becomes easier, the salaries for some technical roles will likely decrease. With cheaper salaries, employers can afford to hire more technical workers to build and maintain digital systems. After all, the demand for building and maintaining digital systems is not going away.
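A toy calculation (illustrative numbers only, not real data) shows how Jevons paradox could play out for technical labor:

```python
# Toy arithmetic sketch of Jevons paradox for technical labor.
# The numbers are invented for illustration, not measured.
hours_per_feature_before = 10
features_demanded_before = 100
total_hours_before = hours_per_feature_before * features_demanded_before  # 1000

hours_per_feature_after = 5    # LLMs halve the effort per feature
features_demanded_after = 250  # cheaper features -> much more demand

total_hours_after = hours_per_feature_after * features_demanded_after  # 1250

# Efficiency doubled, yet total demand for technical labor grew
print(total_hours_after > total_hours_before)  # True
```

The paradox only bites when demand is elastic enough, but the history of software suggests appetite for new digital systems has rarely been the bottleneck.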

Reason #2: Our Reliance on Tech Increases

For every digital system a company builds, they are increasing their reliance on technical skills and knowledge. While generating and troubleshooting code is getting easier, the complexity of organizations’ systems and processes is increasing exponentially. You have legacy systems that need to be maintained, online and offline dependencies, cloud vs. on-premises environments, different types of APIs depending on the use case, data lakes and data warehouses, real-time and batch data pipelines, security concerns everywhere, and more. It is no coincidence that technical job titles are commandeering “engineering” and “architecture” in their names. They are building and maintaining the digital version of a company’s architecture. These responsibilities span much wider than simply writing and maintaining code.

Reason #3: Tech Is WAY More Than Programming

Technical roles are constantly changing because of the development of new technologies. With new technologies and automation, coding is often abstracted away through a no/low-code interface. The breadth of technical subjects has created specializations in each domain and opened the door to “technical translators” between business and tech folks. Technical job titles and requirements continue to evolve as coding becomes a smaller slice of the average tech worker’s responsibilities. Some technical skills and knowledge outside of generating/maintaining code include:

  • Gathering technical requirements for application development
  • Building data visualizations in PowerBI or Tableau
  • Automating workflows using Microsoft Power Automate
  • Developing analytical pipelines via tools such as dbt or Alteryx
  • Managing the development of a web application
  • Quality Control/Testing of code or Defect Management
  • Managing the breadth of stakeholders and diversity of requirement needs for application development
  • Building and showcasing demonstrations of various technologies (e.g. Mixpanel sales role)
  • Configuring Salesforce to digitize a department’s various workflows
  • Designing architecture or database diagrams in Microsoft Visio
  • Building web forms via a no/low code tool (e.g. Qualtrics)
  • Analyzing data in Excel
  • Developing user documentation for an API or web application
  • Writing test cases before a web application goes into production

Reason #4: AI Advancement is Cyclical

Almost every day, it seems like there is a new generative AI model, an interesting use case someone thought of, or a new application developed on top of these models. The pace of AI advancement in the current environment is staggering. For those in tech, it is impossible to keep up with all of the interesting technologies being developed in this space. While the rapid advancement seems limitless, the historical perspective says otherwise.

Advancements in artificial intelligence are known to go through cycles of progress and stagnation. Since 1950, there have been three periods of progress and two periods of stagnation, the latter often called “AI winters.”

The latest advancements were facilitated by the cloud and advancements in chips (e.g. GPUs). While AI development seems unstoppable today, there is likely another AI winter in our future.

Reason #5: Large Organizations Move Slowly

Large organizations are famously slow when it comes to tech development. A perfect example of how long it can take for large organizations to adopt new technologies is the cloud. Amazon first launched Amazon Web Services (AWS) in 2006, Google launched Google Cloud Platform (GCP) in 2008, and Microsoft launched Azure in 2010. Over a decade has passed since these cloud platforms were established, but “moving to the cloud” is still a high priority for many organizations. Many organizations still have some systems utilizing outdated technology, such as Microsoft Access. One thing we know for sure is that integrating LLMs into web applications within large organizations will not happen overnight.

Reason #6: Limitations of LLMs

LLMs are capable of some incredible applications; however, I’d wager that accuracy will always be a HUGE limitation because these models are a black box. Experts have some understanding of how LLMs work, but there is no straightforward, clear answer as to how and why they produce any given output. For example, if you upload a spreadsheet of data into an LLM, it could produce some impressive exploratory data analysis outputs (e.g. analysis, conclusions, visualizations). However, those outputs might contain errors, incorrect conclusions, or flawed assumptions. If the accuracy is questionable, then the next request will be for someone to validate the LLM’s output. Who do you think will do this quality check? A tech professional.
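As a sketch of what that quality check might look like (toy numbers and hypothetical model claims, standard library only), a tech professional could re-derive the statistics the model reported from the raw data:

```python
# Sketch: a human-in-the-loop sanity check on LLM-generated analysis.
# Hypothetical scenario: the LLM claims summary statistics for a column,
# and a tech professional recomputes them from the raw data to verify.
from statistics import mean, median

raw_values = [12, 15, 11, 30, 14]  # toy data standing in for a spreadsheet column

llm_claims = {"mean": 16.4, "median": 14, "max": 30}  # what the model reported

checks = {
    "mean": mean(raw_values),
    "median": median(raw_values),
    "max": max(raw_values),
}

for metric, claimed in llm_claims.items():
    ok = abs(checks[metric] - claimed) < 1e-9
    print(f"{metric}: claimed={claimed}, recomputed={checks[metric]}, ok={ok}")
```

The point is not the arithmetic; it is that someone with the skills to recompute and interpret the numbers has to sit in the loop.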

Another limitation of LLMs is the person using them. The outputs from an LLM are only as good as the questions it is asked. If you are not a data professional, will you know how to ask questions and interpret the answers regarding the following? I highly doubt it.

  • Data leakage when building a predictive model
  • Bias in data collection
  • Whether outliers should be included or excluded from the data analysis
  • Which machine learning evaluation metric (e.g. precision, accuracy) is optimal for the use case
  • Which Python package can speed up a given step of the data analysis
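To make the first item concrete, here is a minimal sketch (toy numbers, standard library only) of how data leakage creeps into a predictive model when preprocessing statistics are computed on all the data instead of the training split alone:

```python
# Sketch: why data leakage matters when building a predictive model.
# Standardizing features with statistics computed on ALL the data
# (train + test) leaks information about the test set into training.
from statistics import mean, stdev

train = [1.0, 2.0, 3.0, 4.0, 5.0]
test = [50.0, 60.0]  # a very different distribution

# Leaky: statistics computed over train + test combined
full = train + test
leaky_mu, leaky_sigma = mean(full), stdev(full)

# Correct: statistics computed on the training data only
mu, sigma = mean(train), stdev(train)

leaky_scaled = [(x - leaky_mu) / leaky_sigma for x in train]
clean_scaled = [(x - mu) / sigma for x in train]

# The training features differ depending on whether the test set leaked in,
# which quietly inflates the model's apparent performance.
print(leaky_scaled[0] != clean_scaled[0])  # True
```

An LLM will happily produce either version; knowing which one is wrong is exactly the specialized knowledge the list above describes.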

The usefulness of the LLM output is highly dependent on the user’s specialized knowledge in the topic.

Conclusion

ChatGPT has ushered in the next AI hype cycle with some impressive abilities that appear indistinguishable from magic. Venture capital and the media have picked up on this wave and will only accelerate its adoption. While the new AI developments are impressive, I would advise aspiring or current technical workers to embrace LLMs rather than fear them. They are more likely another tool in the toolbox for technical workers than a threat of obsolescence.

~ The Data Generalist
Data Science Career Advisor

