Hacker News

This seems to be a reaction by Google to the Amazon SageMaker release in November: https://aws.amazon.com/sagemaker/

It's great to see that other cloud providers are acknowledging the talent and training data gaps that many large enterprises face when adopting deep learning.

Disclaimer: I work for AWS




Disclosure: I work at Google on Kubeflow

This is an externalization of the service we use at Google internally called Vizier[1], first discussed publicly in June[2].

The idea is that instead of having to build a model yourself, we can use ML (yes, it uses ML to provide ML) to autotune your model and solve your business problem. Basically, instead of having to deal with all the steps of opening an editor, choosing an algorithm, tweaking, debugging, etc., you just provide your structured or unstructured data and we'll help you answer your question (which is what customers actually care about).
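The core of that kind of autotuning is a suggest/evaluate loop: the tuner proposes hyperparameters, a black-box objective reports a score, repeat. A minimal sketch of that loop in plain Python, assuming a toy objective and plain random search (the real Vizier service uses smarter search strategies, and none of these names come from its actual API):

```python
import random

# Toy black-box objective standing in for "train a model, return validation
# accuracy". The tuner never sees inside it -- it only maps params -> score.
# We pretend the best settings are learning_rate=0.01, batch_size=64.
def evaluate(params):
    lr, batch = params["learning_rate"], params["batch_size"]
    return -((lr - 0.01) ** 2) - ((batch - 64) / 64) ** 2

def suggest(space, rng):
    """Propose one trial by sampling each hyperparameter from its range."""
    return {
        "learning_rate": rng.uniform(*space["learning_rate"]),
        "batch_size": rng.choice(space["batch_size"]),
    }

def tune(space, n_trials=200, seed=0):
    """Run the suggest -> evaluate -> keep-best loop."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = suggest(space, rng)
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

space = {"learning_rate": (1e-4, 0.1), "batch_size": [16, 32, 64, 128]}
best, score = tune(space)
print(best, score)
```

The point of the black-box framing is that the same loop works whether the objective is a toy function or an hours-long training job; only the search strategy (random, grid, Bayesian) changes.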

[1] https://research.google.com/pubs/pub46180.html [2] https://www.youtube.com/watch?v=Z2YL4XJKVpQ


/clarification

AutoML Vision is actually built on Google Brain’s proprietary image recognition technology, and Vizier is one of the components of their broader solution. You can see their earlier research announcement here[1]. Sorry to leave off the additional teams that helped in building this!

[1] https://research.googleblog.com/2017/05/using-machine-learni...


So an orthogonal approach here might be crowd-sourced centralized model zoos for better idea sharing across the entire industry. Curious how others see this (automated point solutions crafted to the data set) vs [hopefully soon popular] ONNX model zoos where we have more collaboration across orgs?


Same idea as SageMaker. Nice to see I get a bunch of instant downvotes; I sometimes wonder why I even bother participating in this community.


I didn't downvote you, but I think the comparison to Sagemaker misses the point. This is literally just uploading labeled data and getting a finely tuned classifier out. Hyperparameter tuning is neat, and both Cloud ML Engine and Sagemaker have that, but (correct me if I'm wrong), only AutoML actually handles all of the model architecture decisions itself using transfer learning and learning2learn. See here for details: https://research.googleblog.com/2017/11/automl-for-large-sca...

This significantly reduces the level of expertise required to train models, and the AutoML models outperform "expert" human-created architectures.


Disclosure: I work at Google on Kubeflow

Interesting! I read up on Sagemaker here[1] and didn't see any AutoML style training/tuning features, but you would certainly know better than me :)

[1] https://aws.amazon.com/blogs/aws/sagemaker/


As far as I know, they haven't implemented HPO in SageMaker yet. They're planning to add it soon, but no date has been announced.


Does HPO mean hyperparameter optimization? Because AutoML has little to do with that: AutoML is mostly about the architecture of the model, not about hyperparameter optimization.


model shape is a hyperparameter ;)


By that logic, the researcher is another hyperparameter.

(I know you're right, but so many people here think AutoML is exactly the same HPO they've been doing for a long time.)


That's fair. Yes, AutoML is not simply tuning the learning rate and picking your favorite nonlinearity, it's fancier than that, but it's still tuning hyperparameters.
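To make the "model shape is a hyperparameter" point concrete, here is a toy sketch where architecture choices (depth, width, activation) are sampled from the search space exactly like any other hyperparameter; the scoring function and value ranges are made up purely for illustration:

```python
import random

# Toy stand-in for "train this architecture and report validation accuracy".
# We pretend the sweet spot is 3 layers of 128 relu units.
def evaluate(arch):
    depth_pen = abs(arch["num_layers"] - 3)
    width_pen = abs(arch["units"] - 128) / 128
    act_pen = 0.0 if arch["activation"] == "relu" else 0.5
    return 1.0 - 0.1 * depth_pen - 0.2 * width_pen - act_pen

def sample_architecture(rng):
    """Architecture knobs drawn just like any other hyperparameter."""
    return {
        "num_layers": rng.randint(1, 6),
        "units": rng.choice([32, 64, 128, 256]),
        "activation": rng.choice(["relu", "tanh", "sigmoid"]),
    }

rng = random.Random(42)
trials = [sample_architecture(rng) for _ in range(300)]
best = max(trials, key=evaluate)
print(best, evaluate(best))
```

Once the architecture is encoded as a point in a search space like this, any HPO machinery (random search here, reinforcement learning or evolution in the actual neural architecture search papers) can optimize over it, which is why the two camps in this subthread are both sort of right.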


How is SageMaker similar to AutoML? I don't see any reference in SageMaker to defining the architecture of your model for you based on your data.



