• 3 Posts
  • 34 Comments
Joined 2 years ago
Cake day: July 6th, 2023







  • In its suit, Samsung alleged that Oura had a history of filing patent suits against competitors like Ultrahuman, RingConn, and Circular for “features common to virtually all smart rings,” such as sensors, batteries, and common health metrics.

    The problem isn’t the features; it’s that Samsung is copying the very concept of a smart ring. Oura was the first company to make and patent biometric smart rings. So, yeah, if you make a biometric smart ring without paying them, you’re getting sued. That’s how patents work.

    For the past 30 years, Samsung’s consumer product development strategy has been 75% “copy the competitors, then pay lawyers to fight it out.”






  • Or it’s just the classic Apple “launch some weird shit with a cool interaction model or form factor, but we don’t really know how people will -actually- use this.”

    Apple TV, Apple Watch, the FireWire iPod, HomePod, etc. They kick it out, people complain about it, Apple learns from the users who adopted it, then they focus the feature set once they better understand the market fit.

    IMHO, it seems like that’s the play here. Heck, they even started with the “Pro” at the initial launch, which gives them a very obvious off-ramp for a cheaper, more focused non-Pro product.





  • I think enterprise needs will ensure that people develop solutions to this.

    Companies can’t have their data creeping out into the public, or even into other parts of the org. If your customer, roadmap, or HR data got into the wrong hands, that could be a disaster.

    Apple, Google, and Microsoft will never get AI into the workplace if AI is sharing confidential enterprise data outside of an organization. And all of these tech companies desperately want their tools to be used in enterprises.


  • Yeah, a lot of those studies are about stupid stuff like an LLM in-app to look at grammar, or a diffusion model to throw stupid clip art into things. No one gives a shit about that stuff. You can easily just cut and paste from OpenAI’s experience, and get access to more tools there.

    That said, being able to ask an OS to look at a local vectorized DB of texts, images, and documents, recognize context, then compose and complete tasks based upon that context? That shit is fucking cool. (There’s a rough sketch of the idea at the end of this comment.)

    The catch is that a lot of people haven’t experienced that yet, so when they get asked about “AI,” their responses are framed by what they have experienced.

    It’s the “faster horse” analogy. People who don’t know about cars, buses, and trains will ask for a faster horse when you ask them to envision a faster mode of transport.
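
    To make the local-context idea concrete, here’s a minimal, self-contained Python sketch: index a handful of on-device documents as vectors, retrieve the ones most relevant to a request, and hand them to whatever composes the task. Everything in it is hypothetical: embed() is a toy stand-in for an on-device embedding model, and the file names and contents are invented examples, not any real OS or assistant API.

      import math
      from collections import Counter

      def embed(text):
          # Toy "embedding": character-trigram counts. A real system would use
          # an on-device embedding model; this stand-in just makes the sketch run.
          return Counter(text[i:i + 3].lower() for i in range(len(text) - 2))

      def cosine(a, b):
          dot = sum(a[k] * b[k] for k in a)
          na = math.sqrt(sum(v * v for v in a.values()))
          nb = math.sqrt(sum(v * v for v in b.values()))
          return dot / (na * nb) if na and nb else 0.0

      # Hypothetical on-device content that has already been indexed.
      local_docs = {
          "text_from_alex.txt": "Dinner Friday at 7? Can you book the usual place?",
          "flight_confirmation.pdf": "Flight departs SFO 9:40am Saturday, seat 14C.",
          "notes_roadmap.md": "Q3 roadmap review moved to Thursday afternoon.",
      }
      index = {name: embed(body) for name, body in local_docs.items()}

      def retrieve(query, k=2):
          # Rank local documents by similarity to the request and return the top k
          # as context for whatever actually composes and completes the task.
          q = embed(query)
          ranked = sorted(index, key=lambda name: cosine(q, index[name]), reverse=True)
          return ranked[:k]

      # "Book dinner with Alex on Friday" should surface the text from Alex first.
      print(retrieve("book dinner with Alex on Friday"))

    The point isn’t the retrieval math; it’s that the whole loop runs against data that stays on the device.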