If you haven't run into a news article, podcast, or talking head obsessing over Artificial Intelligence and/or Machine Learning (often affectionately grouped together as AI+ML) over the past nine months or so, you've likely been living in a cave. Not surprisingly, the hype cycle for AI has officially begun, and for those of us who have been around long enough to remember prior tech hype cycles, this feels eerily familiar. I thought I would jot down a few thoughts on why I think AI/ML is important, how it's categorically in a different league compared to other tech innovations of the past couple of decades, and the interplay between technology innovation and venture capital. Finally, I'll wrap up this post with where I actually think all of this is headed if humans are indeed going to co-exist with pervasive machine intelligence in the future.

Uh, what exactly is AI?

First off, let's define exactly what the terms Artificial Intelligence and Machine Learning mean (and equally important, what they are not). John McCarthy said the following way back in the 50s:

"Systems that perform actions that,
if performed by humans, would be considered intelligent" - John McCarthy, Dartmouth Conference (1956)

Sounds kind of obvious, right? McCarthy (1927-2011) was an American computer scientist and cognitive scientist who is widely recognized as one of the founders of the field of artificial intelligence (AI). He officially coined the term "artificial intelligence" at the aforementioned Dartmouth Conference. This is probably the first thing worth noting: AI is not a particularly new field and has been around more or less since the beginning of computer science / computing as a formal discipline. To understand why AI has received so much recent buzz, we need to understand the role of data, statistics, and analytics in how AI works. For that, let's start with a brief definition of the term machine learning. What better place to get one than OpenAI's ChatGPT ;-). So here it is, in ChatGPT's own words:

🤓
Machine Learning is a subfield of Artificial Intelligence (AI) that focuses on the development of algorithms and statistical models that enable computers to perform tasks without explicit instructions, instead relying on patterns and inference from data. The key idea is that machine learning models are trained on data, and they 'learn' from this data. The learning can be supervised (the model is provided with input-output pairs and learns to map the input to the output), unsupervised (the model identifies patterns and structures from the data itself), or semi-supervised (a mixture of supervised and unsupervised learning).

Bravo, ChatGPT, well said! So now, putting two and two together, it may make more sense why AI/ML has taken off so recently. The advent of cloud computing at scale has made managing data storage and processing pipelines exponentially cheaper than it was before. There's some nuance to this when it comes to GPUs (graphics processing units) in cloud data centers and their role in AI/ML model training, but that dives further into the weeds than necessary here. Suffice it to say, computing at scale is orders of magnitude cheaper and more efficient than it was prior to the 2010s. The sheer scale of data being leveraged to train modern ML models is massive. In the case of OpenAI's ChatGPT, the input sample set is essentially the entire internet. Yeah, the whole web, at least at a given point in time. Google's Bard takes this a step further by giving the model live access to the Internet as it responds to queries. If you're a little more tech savvy or hands-on, look up AutoGPT here: https://github.com/Significant-Gravitas/Auto-GPT (it's open source and free).
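To ground ChatGPT's definition in something concrete, here is a minimal sketch of the supervised flavor of machine learning in Python (using scikit-learn; the data and numbers are invented purely for illustration):

```python
# Minimal supervised learning: the model is handed input-output pairs
# (square footage -> sale price) and learns the mapping from data alone.
from sklearn.linear_model import LinearRegression

X_train = [[800], [1200], [1500], [2000], [2500]]        # inputs
y_train = [160_000, 240_000, 300_000, 400_000, 500_000]  # outputs

model = LinearRegression()
model.fit(X_train, y_train)  # "learning" = fitting parameters to the data

# No pricing rule was ever spelled out explicitly, yet the model can now
# generalize to an input it has never seen.
print(model.predict([[1800]]))  # -> roughly 360,000
```

No explicit instructions, just patterns inferred from data; that's the whole trick.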

Cloud computing makes this type of modeling and iteration cost-effective and reasonable to pursue. Once it was actually done (most notably by OpenAI), the results took the world by surprise. ChatGPT and Bard are built on a type of machine learning model called the LLM (large language model). I could write an entire blog post on that alone, so instead I will let you find out more about it on your own...dare I say through ChatGPT or Bard!
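If you'd rather poke at an LLM from code than from a chat window, it only takes a few lines. Here's a minimal sketch using the openai Python package (pre-1.0 API style); the API key is a placeholder and the model name is just one of the publicly documented options:

```python
# Minimal sketch: send one prompt to a hosted LLM and print the reply.
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain LLMs in one sentence."}],
)
print(response.choices[0].message.content)
```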

Tech Innovation - What Makes AI Different?

"This time is different..." - famous last words for plenty of prior tech innovations. Some of these innovations truly were transformative; think of the personal computer or Internet and their impact on virtually every facet of our lives over the last three decades. Then came the smart phone. Many people think that the proliferation of social media products also fall within this category, although I would characterize that as more of behavioral pattern change than a true path-breaking technical innovation in and of itself. Regardless, the thing that makes AI different is its potential to fundamentally change the equation as it relates to human productivity. Computers were always designed to be better at synthesizing large quantities of data and follow specific instructions. The leap that AI affords computing is to leverage those strengths to learn and adapt solutioning for specific tasks. Even if that adaptation is limited within certain boundaries, it will have profound impacts on the level of productivity gains achieved in completing tasks that otherwise would have only been possible with human intervention. In my opinion, this exponential bump in productivity is what makes AI different and on par with the kind of impact we witnessed with the advent and proliferation of the Internet. In some ways, I would argue that the potential for change is even greater with AI. Here is a small sampling of where we might see such disruption to the status quo:

  • Manufacturing: AI could be used to automate tasks such as product design, quality control, and logistics. This could lead to significant cost savings and increased efficiency.
  • Healthcare: AI could be used to diagnose diseases, develop new treatments, and provide personalized care. This could improve patient outcomes and reduce healthcare costs.
  • Finance: AI could be used to automate tasks such as trading, risk management, and customer service. This could lead to more efficient markets and better financial services for consumers.
  • Retail: AI could be used to personalize shopping experiences, recommend products, and manage inventory. This could lead to increased sales and improved customer satisfaction.
  • Transportation: AI could be used to develop self-driving cars, trucks, and airplanes. This could revolutionize the transportation industry and make it safer and more efficient.
  • Education: Generative AI can be used to offer personalized tutoring to many different types of learners and audiences. It can be used as a tool to augment traditional teaching methods.

Venture Capital / Tech Innovation Interplay

I wanted to take a moment to talk about the commercialization of disruptive technologies and what history shows us about how these disruptive trends tend to play out over time. The mechanism by which capital investment pursues returns on high-growth, high-potential disruptive technology is undoubtedly the venture capital (VC) asset class. VC funds, by definition, pool money from their limited partners (LPs) to make extremely high-risk, but equally high-reward, bets. Unlike other asset classes that may tolerate normal drawdowns in the range of 8-20% over their lifetime, it's not uncommon at all for many VC investments to go entirely to zero. This is compensated for by the few winners that return on the order of 100x the initial investment. Understanding these high stakes is key to understanding the herd mentality of VC funds and why we tend to see "hype cycles" when new disruptive technology trends emerge.
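To put rough numbers on that dynamic, here's a back-of-the-envelope sketch (every figure below is invented purely for illustration):

```python
# Hypothetical VC portfolio math: most bets go to zero, and the fund's
# overall return hinges almost entirely on a single outlier.
fund_size = 100_000_000              # $100M fund
num_bets = 25
check = fund_size / num_bets         # $4M per company

# 22 total losses, two modest exits, and one 100x outlier.
outcomes = [0] * 22 + [3 * check, 10 * check, 100 * check]

gross_multiple = sum(outcomes) / fund_size
print(f"Gross multiple: {gross_multiple:.1f}x")  # -> 4.5x
```

Strip out the single 100x winner and the same portfolio returns roughly 0.5x, i.e. the fund loses money. That asymmetry is exactly why every fund feels compelled to chase whatever looks like it might produce the next outlier.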

Connecting the dots between what I just mentioned and the world of AI, my going assumption is that we will see an AI investment boom over the remainder of this decade for any and all start-ups that mention the words "Artificial Intelligence" in their pitch. We are at the very early stages of this investment cycle, and we'll see plenty of start-ups join the gold rush to try and find the most compelling products that will shape the next generation of growth and adoption by the masses. That's not necessarily a bad thing...as long as we understand that the vast majority of these investment dollars and corresponding companies will go nowhere and eventually fold. The few winners that do emerge will essentially be the next crop of multi-billion-dollar equivalents to the FAANG (Facebook, Apple, Amazon, Netflix, Google) companies we're used to seeing today. In other words, the dollars are likely chasing AI for good reason, but unfortunately the herd mentality of VC funding tends to over-index on quantity over quality and also tends to get time horizons wrong. When we experienced the dot-com bubble of the late 90s / early 2000s, it wasn't that people fundamentally got the disruptive potential of the Internet wrong; it's that too many dollars were chasing immature, nascent technology companies way too early. While most ended up crashing and burning, the internet companies that emerged a decade later were indeed juggernauts, the most compelling one being Amazon.

Final Thoughts

So what does all of this mean, and where are we headed in the future? I'm not one to buy into hype cycles very easily, but I must admit that I am sold on the future potential of AI. The reason is simple: I've already experienced the productivity gains first-hand. Here are the tools I've used so far that have shown a clear value proposition and productivity boost for the type of work that I do:

The tools above easily save me hours of work on any given day, and that adds up over time. So my hypothesis is that, in the early days, AI tools will offer an unprecedented productivity boost for many knowledge workers. What this also means is that the bar for truly creative, innovative work just got much higher. I'm a software engineer by profession and am amazed at how well LLMs have mastered generating boilerplate code (see the sketch below for the kind of thing I mean). If you find yourself engaged in relatively mundane, repetitive work as a knowledge worker, watch out; it's likely that AI will be coming for your gig in the not-too-distant future.
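To make "boilerplate" concrete: the snippet below is the kind of thing an LLM will reliably produce from a one-line prompt like "write a Python dataclass for a user record with JSON serialization." It's a hypothetical example I wrote for illustration, not output copied from any particular model:

```python
# Routine boilerplate that LLMs generate well: a data record plus the
# usual serialization plumbing around it.
from dataclasses import dataclass, asdict
import json

@dataclass
class User:
    id: int
    name: str
    email: str

    def to_json(self) -> str:
        return json.dumps(asdict(self))

print(User(1, "Ada", "ada@example.com").to_json())
```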

This also creates an opportunity, however, for people to really focus on the skills that are not easily automated and that require human creativity, ingenuity, and the ability to piece together disparate knowledge to accomplish meaningful objectives. That's something AI still has a long way to go to master. I would also make the case that it's in one's interest to pursue a career as a generalist over a specialist (even more so now than before) for the same reasons I just mentioned. Well, that's a wrap for this post! There are lots of exciting topics I can dive into further from here, and I'll keep those in mind for future musings.