
Amazon Elastic Inference will reduce deep learning costs by ~75%

Amazon Web Services today announced Amazon Elastic Inference, a new service that lets customers attach GPU-powered inference acceleration to any Amazon EC2 instance and reduces deep learning costs by up to 75 percent.

“What we see typically is that the average utilization of these P3 instances’ GPUs is about 10 to 30 percent, which is pretty wasteful. With Elastic Inference, you don’t have to waste all that cost and all that GPU,” AWS chief executive Andy Jassy said onstage at the AWS re:Invent conference earlier today. “[Amazon Elastic Inference] is a pretty significant game changer in being able to run inference much more cost-effectively.”

Amazon Elastic Inference will also be available for Amazon SageMaker notebook instances and endpoints, “bringing acceleration to built-in algorithms and to deep learning environments,” the company wrote in a blog post. It will support machine learning frameworks TensorFlow, Apache MXNet and ONNX.

It’s available in three sizes:

  • eia1.medium: 8 TeraFLOPs of mixed-precision performance.
  • eia1.large: 16 TeraFLOPs of mixed-precision performance.
  • eia1.xlarge: 32 TeraFLOPs of mixed-precision performance.
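As a rough sketch of how an accelerator would be attached at launch time, the snippet below builds the request parameters for EC2's `run_instances` call, which accepts an `ElasticInferenceAccelerators` field. The AMI ID, instance type, and region here are illustrative placeholders, not values from the announcement.

```python
# Sketch: attaching an Elastic Inference accelerator to an EC2 instance.
# The actual launch would use boto3 (ec2.run_instances(**params)); this
# helper only assembles the request parameters.

def build_run_instances_params(accelerator_type="eia1.medium"):
    """Build run_instances parameters with an Elastic Inference
    accelerator attached. All concrete values are placeholders."""
    return {
        "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
        "InstanceType": "c5.large",           # cheap CPU instance; the GPU
                                              # acceleration comes from the
                                              # attached accelerator
        "MinCount": 1,
        "MaxCount": 1,
        "ElasticInferenceAccelerators": [{"Type": accelerator_type}],
    }

params = build_run_instances_params("eia1.large")
print(params["ElasticInferenceAccelerators"])

# With boto3 installed and credentials configured, the launch would be:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   ec2.run_instances(**params)
```

The point of the design is visible in the parameters: the instance type stays a small CPU instance, and only the inference acceleration is sized separately via the `eia1.*` accelerator type.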

Dive deeper into the new service here.


Source: Tech Crunch
