Question is, do I downvote the crappy product because I hate it, or do I upvote it so other people can learn about it and hate it with me?
I’m just playing to remind myself what America’s dumb right wingers will be mad about when the HBO show drops the new season.
Looks like we’re seeing the impact of inflation + tariffs.
The OG Switch was $300 in 2017. This console would be about $350 if you adjusted for inflation.
Plot twist. He’s wrong because he was down with it.
In its suit, Samsung alleged that Oura had a history of filing patent suits against competitors like Ultrahuman, RingConn, and Circular for “features common to virtually all smart rings,” such as sensors, batteries, and common health metrics.
The problem isn’t the features, it’s that Samsung is copying the very concept of a smart ring. Oura was the first company to make and patent biometric smart rings. So, yeah, if you make a biometric smart ring without paying them, you’re getting sued. That’s how patents work.
For the past 30 years, Samsung’s consumer product development strategy has been 75% “copy the competitors, then pay lawyers to fight it out.”
Probably not the worst idea if you’ve been diagnosed with heart disease.
Samsung’s just copying Apple yet again.
People are arguing about autopilot being disabled during the drive, but even if it was, the emergency braking system should have tried to do something.
This guy clamps
Or it’s just the classic Apple “launch some weird shit with a cool interaction model or form factor, but we don’t really know how people will -actually- use this.”
Apple TV, Apple Watch, FireWire iPod, HomePod, etc. They kick it out, people complain about it, Apple learns from the users who adopted it, then they focus the feature set when they better understand the market fit.
IMHO, it seems like that’s the play here. Heck, they even started with the “pro” during the initial launch, which gives them a very obvious off ramp for a cheaper / more focused non-pro product.
At least one of those guys is able to ship a product that does what it was advertised to do.
The problem with the Vision Pro is that no one wants to pay $4000 for what it does.
The Vision Pro is a cool solution in search of a user need.
Voice control is a user need that Apple struggles to deliver solutions for.
Apple’s big problem is that Apple Intelligence’s two most interesting features, contextual awareness + Siri finally having deep integration, never got reliable enough to get into public or developer beta this year.
That was the thing everyone wanted, but they basically only got LLM summaries / drafting and image generators. They got the stuff that is easy to make.
I think enterprise needs will ensure that people develop solutions to this.
Companies can’t have their data creeping out into the public, or even creeping out into other parts of the org. If your customer, roadmap, or HR data got into the wrong hands, that could be a disaster.
Apple, Google, and Microsoft will never get AI into the workplace if AI is sharing confidential enterprise data outside of an organization. And all of these tech companies desperately want their tools to be used in enterprises.
Yeah, a lot of those studies are about stupid stuff like an LLM in-app to look at grammar, or a diffusion model to throw stupid clip art into things. No one gives a shit about that stuff. You can easily just cut and paste from OpenAI’s experience, and get access to more tools there.
That said, being able to ask an OS to look at your local vectorized DB of texts, images, and documents, recognize context, then compose and complete tasks based upon that context. That shit is fucking cool.
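For anyone curious what the retrieval half of that looks like, here’s a minimal, hypothetical sketch: documents mapped to embedding vectors, then ranked by cosine similarity against a query. The hand-made 3-D vectors and document names are made up for illustration; a real system would use a learned embedding model and an actual vector DB.

```python
import math

# Toy stand-in for a local vectorized DB: each document is mapped to an
# embedding vector (hand-made here; a real system would use an embedding model).
DOCS = {
    "flight itinerary email": [0.9, 0.1, 0.0],
    "birthday text thread":   [0.1, 0.9, 0.1],
    "tax PDF":                [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query embedding pointing in the "travel" direction pulls up the itinerary.
print(retrieve([1.0, 0.0, 0.1]))  # ['flight itinerary email']
```

The OS-level magic is everything around this: keeping the index fresh as files change, and handing the retrieved context to a model that can actually compose and complete the task.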
That said, a lot of people haven’t experienced that yet, so when they get asked about “AI,” their responses are framed with what they’ve experienced.
It’s the “faster horse” analogy. People who don’t know about cars, buses, and trains will ask for a faster horse when you ask them to envision a faster mode of transport.
Why can’t it work?
I work on AI systems that integrate into other apps and make contextual requests. That’s the big feature that Apple hasn’t launched, and it’s very much a problem that others have solved before.
The new models are being fixed by “nut clamping”
large flaps of skin
It’s not available to the public