
Hey HN,

After almost 3 years of R&D, our team of engineers and multi-disciplinary fitness professionals has developed the first OS for the human body - Movement OS - built on a set of computer vision and deep learning models plus proprietary UX/UI technology. It comes to life through Altis, an AI personal trainer that plugs into any screen via a compact console.

Altis sees you, understands you, and personally instructs you in the most interactive and intelligent fitness experience ever – at a fraction of the cost of a personal trainer.

Website: https://altis.ai

Constantine




Can you share more details about the computer vision model and what makes it unique compared to some of the other vision-enabled connected fitness devices on the market?


Altis set out 2 years ago to build a computer vision model of the human skeleton accurate enough for the nuanced, complex movement instruction we envisioned in the Altis product and user experience. Our benchmark was human instruction, which of course relies on our complex sensory and neurological systems to process visual information and provide feedback.

All competitors and developers working in this particular subfield of computer vision - human kinematics and biomechanics - use simpler 2D pose estimation models that rely on a single camera, or on two cameras in close proximity with limited 3D point cloud capability. This drastically limits their ability to create a data set that a deep learning model can use for corrective purposes.
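A quick way to see the limitation of single-camera 2D pose estimation: under a pinhole projection, every 3D point along a ray from the camera center lands on the same pixel, so a lone 2D keypoint cannot recover depth. A minimal sketch (the intrinsics matrix here is illustrative, not from any Altis hardware):

```python
import numpy as np

# Illustrative pinhole intrinsics: focal length 800 px, principal
# point at (320, 240). These numbers are made up for the example.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(point_3d):
    """Project a 3D point (camera frame) to 2D pixel coordinates."""
    uvw = K @ point_3d
    return uvw[:2] / uvw[2]

# Two different 3D points on the same ray from the camera center:
near = np.array([0.5, 0.2, 2.0])   # 2 m away
far  = near * 3.0                  # 6 m away, same direction

pixel_near = project(near)
pixel_far  = project(far)

print(np.allclose(pixel_near, pixel_far))  # True: depth is lost
```

Both points project to the identical pixel, which is exactly the ambiguity that per-pixel depth (as from ToF cameras) resolves.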

Altis uses two ToF (Time of Flight) cameras spaced 20” apart on an elegant soundbar-sized device to capture the human body and its motion in real time, with no wearable sensors and at any body angle relative to the device.
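With two depth cameras at a known baseline, the clouds from each viewpoint can be fused by a rigid transform into one reference frame. A minimal sketch of that fusion step, assuming (purely for illustration) no rotation between the cameras and a 20" (0.508 m) lateral offset:

```python
import numpy as np

# Hypothetical extrinsics for the second camera: 0.508 m (20")
# lateral baseline and identity rotation, for illustration only.
R = np.eye(3)                       # rotation: camera B frame -> camera A frame
t = np.array([0.508, 0.0, 0.0])     # translation between camera centers, metres

def to_reference_frame(points_b, R, t):
    """Rigidly transform an (N, 3) point cloud from camera B's
    frame into camera A's frame: p_a = R @ p_b + t."""
    return points_b @ R.T + t

cloud_a = np.array([[0.0, 0.0, 2.0]])      # a point seen by camera A
cloud_b = np.array([[-0.508, 0.0, 2.0]])   # the same point in camera B's frame

merged = np.vstack([cloud_a, to_reference_frame(cloud_b, R, t)])
print(np.allclose(merged[0], merged[1]))  # True: both rows describe one point
```

In practice the extrinsics would come from calibration rather than being hard-coded, but the transform itself is this simple.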

Our pose estimation model uses several sophisticated, novel techniques to process the point cloud and create an accurate skeleton in 3D space that serves as both the visual interface for our application and the input to the predictive, deep learning model needed for accurate movement correction. This predictive capability goes far beyond any product on the market that purports to offer “form correction” with any level of accuracy or intelligence.
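Once joints exist as 3D points, form checks can be expressed geometrically. A toy sketch of one such check - the joint positions, angle threshold, and "push-up elbow" rule below are all hypothetical, not Altis's actual model:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, formed by 3D joints a-b-c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Toy skeleton frame: shoulder, elbow, wrist positions in metres.
shoulder = np.array([0.0, 1.4, 2.0])
elbow    = np.array([0.0, 1.1, 2.0])
wrist    = np.array([0.3, 1.1, 2.0])

angle = joint_angle(shoulder, elbow, wrist)
print(round(angle))  # 90

# A made-up form rule: flag an elbow angle past 100 degrees.
print("ok" if angle <= 100.0 else "flag")
```

A learned model would replace the hand-written threshold with predictions over whole movement sequences, but the 3D skeleton is what makes angle-based reasoning like this well-posed in the first place.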

Our team includes many talented AI and computer vision engineers, among them our CIO Constantin Goltzev, who co-founded the Neuromation MLOps platform, and our head of AI Andrew Rabinovich, a leading researcher in AI and computer vision with Headroom, Google (engineering), and Magic Leap (Head of AI) under his belt.


well done



