
Getting an AI Smartphone or AI Laptop? Are You Sure You’re Ready for That Level of AI Invasion?

The AI Invasion: Are We Ready for the Shiny New Thing We Don’t Quite Need Yet?

by Felix Omondi

Alright, picture this: you’re at your local tech store, eyeing the latest shiny gadget. The salesperson, who’s probably trying to hit their monthly quota, pitches the newest smartphone with built-in AI. “It’s got a Large Language Model,” they say. “It’s the future!” But is it really? Or is it just another gimmick to get us to part with our hard-earned cash?

AI Assistants: Overkill or Over-hyped?

Let’s be honest. We’ve had digital assistants for years now: Siri, Alexa, Cortana, and the trusty “Okay Google.” They set reminders, answer random trivia questions, and even tell us jokes. So why are we suddenly throwing full-blown generative AI into the mix? Do we really need a Large Language Model (LLM) on our phones? Spoiler alert: probably not.

Imagine trying to converse with a sophisticated LLM on your smartphone. You’re in the middle of typing a complex request, and autocorrect decides to go rogue. Instead of asking for the best Italian restaurant nearby, you’ve just inquired about “resting Italian beards.” Smooth.

The Great Data Dilemma

Every interaction with these AI models is a goldmine of data. And companies love data. It’s like digital candy. But here’s the catch: not all data is created equal. Remember Google’s AI debacle? They tried to make web searches smarter with AI, but the results were…let’s just say, less than ideal. Training an AI on unscreened data is like teaching a parrot phrases from a reality TV show. Entertaining? Sure. Useful? Not so much.

Now, imagine millions of Gen-Z users (the same generation about to disrupt Kenya’s political scene) with these AI-equipped smartphones. They’re bored, they’re creative, and they’re determined to jailbreak these AIs in the most extravagant ways possible. If you thought AI “hallucinations” were bad now, just wait. We’re talking next-level weirdness.

Screening the Stream: A Herculean Task

Removing Reddit text from AI training data is one thing. Screening a never-ending stream of data from millions of devices? That’s a task for the gods. And even if some tech guru claims they can fix the infamous AI hallucinations, don’t buy it. These aren’t bugs you can squash. They’re features, born from the very flexibility that makes LLMs what they are. Want your AI to be creative? Get ready for it to also be a bit of a loose cannon.

Humans: The Real Culprits

Let’s face it. The problem isn’t AI; it’s us. We create the data that trains AI, complete with all our quirks and biases. We’re the ones who get starry-eyed over new tech we barely understand. Remember how the internet turned out? Yeah, we jumped in headfirst, hoping to figure it out along the way. Spoiler alert: we’re still figuring it out.

The Shiny New Thing Syndrome

So why do we keep chasing these AI-equipped smartphones? Simple. We love new, shiny things. Even if they’re more about glitz than practicality. The real kicker? We don’t need an AI that can compose Shakespearean sonnets. We need one that can remind us to pick up milk on the way home. But hey, as long as it’s shiny and new, we’ll probably buy it anyway.
