
Google Cloud launches AutoML Tables, Video Intelligence, and Vision Edge

One focus for Google Cloud is increasing customer adoption of AI by offering a wide range of machine learning services at all levels. Launched last year, Cloud AutoML is aimed at developers with limited machine learning expertise who still want to train and leverage models.

With Cloud AutoML, users can train, evaluate, improve, and deploy models through a graphical UI simply by uploading their own data. It’s currently available in beta for Vision, Natural Language, and Translation. At Cloud Next, Google announced new products and features across the line.

AutoML Tables

The newest beta product lets you create machine learning models from tabular datasets. AutoML Tables can find predictive insights and patterns in the structured data that enterprises already generate today.

Google touts that no coding is needed and that development takes days instead of weeks. Source data is easily transferred from BigQuery or other GCP storage services.

The codeless interface guides you through the full end-to-end machine learning lifecycle, making it easy for anyone on your team—whether data scientist, analyst, or developer—to build models and reliably incorporate them into broader applications.
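For teams that do want to script the pipeline, data can also be imported programmatically. Below is a minimal sketch of the REST request body for pulling a BigQuery table into an AutoML Tables dataset (the v1beta1 `datasets:importData` call); the project, dataset, and table names are invented placeholders, and the exact field names should be checked against the current API reference.

```python
import json

def import_data_body(bq_uri: str) -> str:
    """Build the importData request body for a BigQuery source (sketch)."""
    body = {
        "inputConfig": {
            "bigquerySource": {
                # AutoML Tables expects a bq:// URI of the form
                # bq://project.dataset.table (placeholder below).
                "inputUri": bq_uri,
            }
        }
    }
    return json.dumps(body, indent=2)

# Example: point AutoML Tables at a hypothetical sales table.
print(import_data_body("bq://my-project.sales.transactions"))
```

In practice this body would be POSTed (authenticated) to the dataset's `importData` endpoint, or built for you by the `google-cloud-automl` client library.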

AutoML Video Intelligence

Cloud Next 2019 also sees the beta debut of AutoML Video, which creates custom models that automatically classify video content with labels defined by developers.

This means media and entertainment businesses can simplify tasks like automatically removing commercials or creating highlight reels, and other industries can apply it to their own specific video analysis needs—for example, better understanding traffic patterns or overseeing manufacturing processes.

AutoML Vision Edge

Image recognition models can now be trained and deployed on-premises or on remote edge devices, which often have unreliable connectivity or high latency. This includes connected sensors and cameras, and Vision Edge can take advantage of Edge TPUs for faster inference.
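Vision Edge models are exported in TensorFlow Lite format for on-device use. The sketch below assumes an exported model file, an invented label list, and Coral's documented Edge TPU delegate; only the pure label-picking helper runs without the TFLite runtime and a device attached.

```python
def top_label(scores, labels):
    """Return the label with the highest score (pure Python helper)."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best]

def classify_on_edge_tpu(model_path, image, labels):
    """Run one image through an exported Vision Edge TFLite model (sketch)."""
    # Imported lazily so top_label() works without TFLite installed.
    from tflite_runtime.interpreter import Interpreter, load_delegate

    interpreter = Interpreter(
        model_path=model_path,  # placeholder path to the exported .tflite file
        # Offload inference to the Edge TPU via Coral's delegate library.
        experimental_delegates=[load_delegate("libedgetpu.so.1")],
    )
    interpreter.allocate_tensors()
    interpreter.set_tensor(interpreter.get_input_details()[0]["index"], image)
    interpreter.invoke()
    scores = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])[0]
    return top_label(scores, labels)

# The helper alone, with made-up scores for three made-up labels:
print(top_label([0.1, 0.7, 0.2], ["cat", "dog", "bird"]))  # dog
```

Without the delegate argument the same interpreter falls back to CPU inference, which is the trade-off the Edge TPU is meant to avoid.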

Updates

  • Object detection (beta) in the full AutoML Vision can identify the position of items within an image and how they relate to one another; Google cites a pedestrian walking in a crosswalk as an example.
  • AutoML Natural Language picks up custom entity extraction (beta) to identify medical terms, contractual clauses, and other entities within documents, and label them based on company-specific keywords and phrases.
  • Additionally, custom sentiment analysis (beta) can understand the overall opinion, feeling, or attitude in an analyzed piece of text, in line with your domain-specific sentiment scores.
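Both Natural Language additions are served through the same prediction endpoint once a custom model is trained. Below is a sketch of the request body sent to a v1beta1 `models:predict` call for a text document; the sample sentence is invented, and the field names should be verified against the current API reference.

```python
import json

def predict_body(text: str) -> str:
    """Build the predict request body for a text snippet (sketch)."""
    body = {
        "payload": {
            "textSnippet": {
                # The document to analyze, sent inline as plain text.
                "content": text,
                "mime_type": "text/plain",
            }
        }
    }
    return json.dumps(body, indent=2)

# Example: a hypothetical contractual clause for a custom extraction model.
print(predict_body("The indemnification clause survives termination."))
```

For entity extraction the response would carry annotated spans; for custom sentiment, a score mapped to your domain-specific scale.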


You’re reading 9to5Google — experts who break news about Google and its surrounding ecosystem, day after day.

Author

Abner Li

Editor-in-chief. Interested in the minutiae of Google and Alphabet. Tips/talk: abner@9to5g.com