AI wave


Just because we can doesn't mean we should.

The AI wave is in full force — every piece of news, every conversation in the Bay Area, every day comes with some AI twist. It seems to be making things better on the whole. But how do we know?

In the software world, it has devalued the work that I do. My sense is that there's a perception that software engineers are less valuable — or at least can be replaced by more junior engineers with the assistance of AI. The market demands higher output from them and offers less pay.

Some may say this is a general market correction. Is it a coincidence that ChatGPT hit 1 million+ users just around the time the correction happened? Perhaps, but my gut says it inflated the perception that the promise of AI would wipe out the need for software engineers; that promise became part of the reasoning for mass layoffs, a strategy to "cut fat" now and backfill the workforce with AI tooling.

While I love AI, while it makes my work easier, while it makes so many things easier, I'm not sure I would have traded it for the pay cuts. That's a selfish perspective — I do think it benefits the whole. But the mid-thirties mammal in me is a little disoriented by the massive paradigm shift. The rate of change is intense, and living with the knowledge that the future is much less secure is a little unnerving.

I'm certain I'm not alone. So many others are worried about what the technology can bring. On the whole, I'm optimistic, but I think the benefits are outsized for those already positioned to take advantage of the wave. No, Silicon Valley enthusiasts, that is not something everyone can do — it's something for those typically in positions of power, privilege, or luck.

When I hear whispers of Sam Altman's dream of an all-powerful AI with a trillion-token context window that can understand us so deeply and recall every conversation we've ever had, I become suspicious. That doesn't sound utopian. Maybe I'm old-fashioned now, but it sounds dangerous. I can't even articulate exactly why it's dangerous, but I feel that maybe forgetting what happened can be good sometimes. Perhaps remembering every little detail, or relying so much on an external entity, will rob us of something important.

As I muse here, I recall the protests of cab drivers when Google Maps came out. People feared we'd lose a valuable skill — navigation — by relying only on Google Maps. I think the same may be true with LLMs: we might lose our ability to think and write critically by relying on an external entity to do that for us. Reviewing and evaluating something written is a very different skill than thinking and writing something on our own. But maybe we're at the same precipice — just as navigating without a digital map is a skill we rarely need anymore, perhaps navigating the world without an LLM will soon be the same.

Maybe it's fine to build this technology; maybe we'll no longer need certain skills that AI can handle for us. But I have no doubt that other unforeseen complications will emerge, just as we've seen with social media, which traded our collective mental health to advertisers in exchange for a new platform for broadcasting and marketing ourselves.

I'm not so sure that social media has made the world collectively better. Different, sure. Some things are better, yes. But at what cost? One we can't measure. Maybe it'll be the same for LLMs. And the wave has started, so maybe it doesn't matter what the cost is. The tsunami is gonna hit, so best to brace ourselves. Pull out your surfboard and ride it.

Nov 3, 2024

7:30AM

Alameda, California