Apple would never make this mistake


Hey Reader,

Another week, another implosion from a handheld / wearable AI company.

Last week we talked about the extremely negative reception for the Humane Ai Pin: one popular tech commentator, Marques Brownlee, suggested it was the worst product he’d ever reviewed.

Today it’s the Rabbit R1, which absolutely tanked in The Verge’s test drive. Here’s their review video, including lots of cringe-inducing image-recognition and logic failures.


Three months ago I detailed my skepticism about both these gadgets, especially the Rabbit R1, which strikes me (and The Verge reviewer) as almost scam-level in its inability to deliver on even its most basic promises. Here’s a representative quote from The Verge’s review:

“The long and short of it is this: all the coolest, most ambitious, most interesting, and differentiating things about the R1 don’t work. They mostly don’t even exist. When I first got a demo of the device at CES, founder and CEO Jesse Lyu blamed the Wi-Fi for the fact that his R1 couldn’t do most of the things he’d just said it could do. Now I think the Wi-Fi might have been fine.”

So, the first few major AI handheld / wearable products have been horrendous flops.

Let’s look at what went wrong.

‘How could 100k preorders be wrong?’

When I first criticized the Rabbit R1 back in February, I received a couple of reader responses noting that the product had been incredibly popular - selling out multiple rounds of preorders - after its founder demonstrated it in a keynote speech at the Consumer Electronics Show.

The implication was that, since people were buying it, my criticism was likely wrong, and I was likely overlooking something. (Possibly relevant side-note: People also say this to me a lot about crypto.)

Now, I think it is safe to say that most of those 100k preorder customers will be disappointed. Where did they go wrong?

I think the big-picture takeaway here is that AI is so new and fast-moving that it can be easy to be swept up in hype, in a way that potentially makes you, as a consumer, susceptible to people who exaggerate, lie, or don’t fulfill their promises. I don’t think there was Theranos- or FTX-level wrongdoing with the R1 (it’s just very hard to make a good AI hardware product, for reasons I’ll detail in a moment), but you can see how this pattern might repeat itself over and over again – big new idea, seems theoretically possible, lots of hype and money going in ... crappy product coming out.

This is the polar opposite of what Apple does. They do amazing keynote product reveals and then they deliver amazing hardware and software.

The Rabbit R1 and Humane Ai Pin both evoked that Apple style when they revealed their products. Unlike Apple, they both launched duds.

Consumers need to be cautious in the early days of AI-powered hardware. We are not dealing with established winners or brilliant designers/inventors like Steve Jobs. The entire market deserves a lot of skepticism - or, at the very least, you should only buy AI hardware if you’re OK with playing with it for 20 minutes and then allowing it to collect dust in a desk drawer until you someday donate it to Goodwill.

(Speaking of which, the Meta AI Ray-Bans that my Incubator students forced me to buy are currently sitting in a dark corner of my mudroom, never to be worn again...)

Why doesn’t the tech work? (an under-the-hood analysis)

One limitation of the reviews I linked above is that they are approaching things from a consumer-technology standpoint. They want to explain whether you should buy the device for yourself or your kid for Christmas. By contrast, what I want to do is look a little deeper into why we are now seeing a pattern of AI-powered hardware failing at the consumer level.

The first issue is that AI is still at the “early adopter” point on the tech adoption curve, where you need to accept some bugs and flaws to be a power user. (You are probably in this group if you’re using ChatGPT and other AI tools on a regular basis. They don’t always work and you’re OK with that.)

The consumer product market is very different, and it requires your product to be ready for an early-majority or late-majority audience. That means you can’t ship something that’s riddled with bugs. Even the performance of GPT-4, which is the best LLM on the market, is not sufficient for “late majority” adoption.

Beyond the obvious bugs, though, the flaw that plagues both the Rabbit R1 and the Humane Ai Pin is very simple, and yet very difficult to solve right now:

AI is just too slow for use in everyday consumer tech right now.

What’s interesting is that these two devices failed even though they tried to solve the speed problem in opposite ways.

Failure Type 1 – The Cloud Is Too Slow

The Humane Ai Pin sends all its data to the cloud and back on a cellular data connection. This provides decent results in terms of how “smart” the large language model is. However, it is dreadfully slow compared to Siri, Alexa or anything you’d do with a smartphone app. So, it more or less passes the “quality” test but fails the “speed” test. It also forces you to pay for a $20+/month data plan just for your Pin.
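To make the speed problem concrete, here’s a back-of-the-envelope latency budget for a cloud-first device versus a classic on-device voice assistant. Every number below is an illustrative assumption for the sake of the sketch, not a measurement of the Humane Ai Pin:

```python
# Rough, illustrative latency budget for a cloud-first AI device.
# All numbers are assumptions for illustration, not measurements.

def total_latency_ms(components: dict) -> float:
    """Sum the per-step latencies (in milliseconds)."""
    return sum(components.values())

cloud_ai_assistant = {
    "wake + audio capture": 500,
    "cellular upload": 300,
    "speech-to-text": 400,
    "LLM generates answer": 2000,   # big models are slow
    "text-to-speech + download": 800,
}

classic_phone_assistant = {
    "wake + audio capture": 500,
    "on-device intent matching": 300,  # pattern matching, no LLM
    "canned response": 200,
}

print(f"Cloud AI assistant:      ~{total_latency_ms(cloud_ai_assistant) / 1000:.1f} s")
print(f"Classic phone assistant: ~{total_latency_ms(classic_phone_assistant) / 1000:.1f} s")
```

Even with generous assumptions, the cloud round trip plus LLM inference adds up to several seconds per query, versus roughly a second for the old pattern-matching assistants. That gap is the whole user experience.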

Failure Type 2 – Local LLMs Are Much Less Smart

The Rabbit R1 attempts to be a self-contained device, which means it runs its own (smaller) large language model right on the physical hardware itself. In theory, this would allow you to perform some “AI chatbot” activity more quickly, since you are never going out to the internet and waiting for a response from the LLM. (Note: Apparently the R1 does use GPT or other internet-based LLMs in some scenarios, but the goal is to use the internal hardware most of the time.)

The tradeoff is that LLMs on local hardware have far fewer parameters, aren’t trained as well, and just can’t do the same job as GPT-4, Claude 3 or other top-of-the-line stuff. And you get videos like the one at the top of this email, where the user takes a picture of a dog toy and the R1 thinks it is a tomato and/or a red bell pepper and then confidently recommends eating it.
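A quick way to see why on-device models have to be so much smaller: the model’s weights have to fit in the gadget’s RAM. A rough sizing sketch (the model sizes and precisions here are illustrative, not R1 specifics):

```python
# Approximate memory needed just to hold a model's weights,
# ignoring activations and runtime overhead. Model sizes and
# precisions are illustrative examples, not R1 internals.

def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    """Memory for the weights alone, in gigabytes."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A small quantized model can squeeze into a gadget's few GB of RAM...
print(f"~3B-parameter model, 4-bit quantized: {weights_gb(3, 0.5):.1f} GB")

# ...while even a mid-size model at full 16-bit precision cannot.
print(f"~70B-parameter model, 16-bit:         {weights_gb(70, 2):.0f} GB")
```

GPT-4-class models are widely believed to be far larger still, which is why “run GPT-4 locally on a $199 gadget” was never on the table; the device is stuck with a model small enough to fit, and that model is much less capable.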

So, the Rabbit R1 is more or less on the right track in trying to do more locally to speed up the experience, but the local LLM isn’t really useful for much of anything, and people are just better off using their phones.

What’s next?

One thing I am very excited about is the prospect of local LLMs running on iPhones and more advanced smartphones. They have a lot of processing power and are already hooked up to our daily life, so there’s interesting potential there. And, of course, Siri and Alexa will be AI-powered soon, and we’ll see huge improvements in the quality of their responses as they slowly converge with the level of “humanity” that we see from ChatGPT.

Until next time,

– Rob Howard
Founder of Innovating with AI


PS. On Monday I asked: Do you want to become an AI consultant?

And 350+ of you said yes. I'm humbled by how thoughtful all of you were in filling out the application. It's pretty clear there's strong interest here, so I've got a meeting planned with the team to get going on creating something exceptional for you.

Like we've done before, we'll be reaching out to those of you who gave us the feedback we need to guide IWAI in the direction it needs to go. That means that once it's ready, you'll hear about it before anyone else.

If this is the first time you're hearing about it - it's not too late to help us build the next educational product/program. Click here to tell me about your AI consulting goals.

Innovating with AI

We help entrepreneurs and executives harness the power of AI.
