Unveiling the Rabbit R1: A Recap of the Launch Event
Rabbit recently hosted its launch event for the Rabbit R1 at the iconic TWA Hotel at New York City’s JFK Airport.
The event kicked off with attendees doing some appropriately rabbit-like nibbling on hors d’oeuvres. CEO Jesse Lyu then took the stage to present the R1, emphasizing the company’s approach of pairing simple, well-designed hardware with server-side processing of user requests. Lyu highlighted the R1’s reliance on a Large Action Model (LAM) rather than a Large Language Model (LLM), aiming to shift from app-based interactions to action-based ones.
Live demos of the R1 impressed attendees despite slow network conditions. The device showed off weather updates, spreadsheet manipulation, language translation, and music streaming. A food-ordering demo via DoorDash ran into trouble, however, attributed to backend issues. The R1’s capabilities also extend to hailing Ubers and generating images through Midjourney.
Unlike traditional AI integrations, the R1 doesn’t rely on external SDKs or APIs but directly interacts with app actions. Lyu stressed that training actions for each app is a time-intensive process, which explains the device’s limited app compatibility at launch.
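To make that distinction concrete, here is a minimal, purely hypothetical sketch of what an action-based approach could look like, as opposed to calling a documented API. Rabbit has not published its LAM internals, so the names below (UIAction, ActionScript, run_action_script) and the example steps are illustrative assumptions, not the company’s actual code.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch only: these types and names are invented for illustration.

@dataclass
class UIAction:
    """One learned step against an app's interface (tap, type, read)."""
    kind: str          # e.g. "tap" or "type"
    target: str        # a UI element the model has learned to locate
    value: str = ""    # text to enter, if any

# An "action script" is the per-task unit such a system might learn,
# in contrast to an API integration that calls documented endpoints.
ActionScript = list[UIAction]

# Invented example of a learned food-ordering flow.
order_food: ActionScript = [
    UIAction("tap", "search_box"),
    UIAction("type", "search_box", "burrito"),
    UIAction("tap", "first_result"),
    UIAction("tap", "checkout_button"),
]

def run_action_script(script: ActionScript, perform: Callable[[UIAction], None]) -> None:
    """Replay each learned UI step through whatever automation backend is available."""
    for step in script:
        perform(step)

if __name__ == "__main__":
    # Stand-in backend that just prints each step; a real system would
    # drive the app's interface server-side instead.
    run_action_script(order_food, lambda a: print(f"{a.kind} -> {a.target} {a.value}".strip()))
```

The point of the sketch is the trade-off Lyu described: each app needs its own learned script rather than a one-time API hookup, which is why broad app coverage takes time.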
The Rabbit R1 itself, designed by Teenage Engineering, features a compact, plastic body with a display, scroll wheel, action button, camera, and USB-C port. Attendees noted a nostalgic yet slightly impractical design, particularly with the scroll wheel’s operation and the absence of voice activation.
The Rabbit R1 is now available for $199, supporting 4G LTE and Wi-Fi connectivity. Despite its innovative approach, questions remain about its ability to replace smartphone AI assistants convincingly.