June 16, 2023
3 Minutes

AI Renaissance, or AI Catastrophe?

Life can be strange sometimes.

Over the past year, I have kept an eye on advances in Artificial Intelligence. GPT-3.5 was impressive, but the model made too many basic mistakes. It was promising, but I had yet to see how it would disrupt everything the way everyone claimed. I knew advancements were being made, but when GPT-4 was released, I bought in entirely.

My mind soon went into an endless loop, analyzing how the world was going to change, and soon. Everywhere I went, I saw only ways that Artificial Intelligence could make life better. Driving past my local high school, I couldn't help but think that education is poised to be turned on its head. Just imagine a world where every child can get a personalized tutor who teaches them at their exact level, in their language, anywhere in the world.

I'm incredibly excited to see AI adopted in the medical field. We've already seen models outperform doctors on certain diagnostic tasks. This raises a fascinating philosophical question: if AI can give better medical advice and diagnoses than humans at a significantly lower cost, is it not a moral requirement to replace all human doctors? It's the same argument made by supporters of autonomous driving. I foresee a future where medical and auto insurance companies adopt AI quickly. There is simply too much money to be saved.

That doesn't mean there aren't pitfalls, however. The Chief Scientist of OpenAI has flatly stated that he doesn't fully understand what happens inside the black box. Meta continues to release newer versions of its open-source AI model Llama, but the public has yet to learn what goes into the training. Open source may be open to all, but it's not outside the realm of possibility that these models serve nefarious purposes, ranging from the relatively benign, bias in the training data and human reinforcement, all the way to state actors releasing models that toe a particular narrative.

But the consequences for work and society keep me awake at night. What is going to happen when the cost of knowledge becomes zero? White-collar jobs have been a significant path to a middle- and upper-class life for many Americans. What will happen when humans can no longer compete against a computer? Layoffs have already started, and they are only going to accelerate. Society has always undergone technological change, with the common refrain being "People will adapt and find new jobs." But that has always assumed that new opportunities arise in which humans have a competitive advantage. This time, it truly is different. Whatever new opportunities are created will be done better by AI. Humans need not apply.

This is a fascinating world with vast societal implications, and the more I dug into it, the more I realized that the only way to understand these systems is to use them myself. I set up my local machine and started experimenting with a few Llama models pulled from HuggingFace. While they were slow to respond, I found their performance comparable to GPT-3.5. Not mind-blowing, but the open-source community is moving fast. Some folks have even managed to get low-parameter local models running on their smartphones!
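If you're curious what that experimentation looks like in practice, here is a minimal sketch of loading a Llama-family checkpoint with the Hugging Face transformers library. The checkpoint name below is just an illustrative assumption, not necessarily the exact model I ran.

```python
# A minimal sketch of running a local Llama-family model with Hugging Face
# transformers. The checkpoint name is an assumption -- substitute whichever
# model you've pulled down from HuggingFace.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openlm-research/open_llama_3b"  # hypothetical example checkpoint

# Load the tokenizer and model weights (downloaded on first run, cached after).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Run a simple prompt through the model and print the completion.
prompt = "Explain in one sentence why open-source language models matter."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On an ordinary CPU this takes a while per response, which matches the "slow to respond" experience above.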

The world is changing fast, and we're all along for the ride. Perhaps I'll start an AI project to see how far we've come…

--------

Update! I ended up creating an AI app that blends my hobbies of collecting coins and building products. Check out my post on Numi: The World's Most Powerful Coin Grading and Identification AI