    ML-Ops Professional Services

    Management, deployment, and consulting of machine learning lifecycle workloads in the AWS Cloud

    Overview

    Our goal is to guide and advise clients on the evolution of machine learning lifecycle management in the AWS Cloud. The pillars of our service are described below:

    Build algorithms and models: In Amazon Web Services, a model in the cloud is defined by the existence of its corresponding algorithm, understood as the mathematical-statistical concept that underpins the model. A model, in turn, is an artifact containing the trained weights, i.e. the version of the algorithm that satisfies the business criteria. Correctly registering an algorithm means building a Docker image that can run the training and deployment process. This is a fundamental part of the service, guaranteeing that each solution can be used under the parameters required by the different model orchestrators in the cloud, such as AWS Step Functions or an ML-Ops framework. Alternatively, if the client wishes to create a model from an algorithm pre-built in AWS, we assist with configuring and running the solution to obtain the final model.
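
    As a minimal sketch of this algorithm/model split, registering a trained model against its algorithm image with the boto3 SDK might look like the following (the image URI, S3 artifact path, IAM role, and model name are hypothetical placeholders, not values from this listing):

```python
# Hypothetical placeholders -- replace with your own account values.
IMAGE_URI = "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algorithm:latest"
MODEL_ARTIFACT = "s3://my-bucket/models/model.tar.gz"
ROLE_ARN = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"


def build_model_request(name):
    """Assemble a CreateModel request: the Docker image carries the
    algorithm (training/serving code), while ModelDataUrl points at the
    trained weights, i.e. the model itself."""
    return {
        "ModelName": name,
        "PrimaryContainer": {
            "Image": IMAGE_URI,          # the algorithm
            "ModelDataUrl": MODEL_ARTIFACT,  # the trained model
        },
        "ExecutionRoleArn": ROLE_ARN,
    }


if __name__ == "__main__":
    import boto3  # AWS SDK for Python; requires AWS credentials

    sagemaker = boto3.client("sagemaker")
    sagemaker.create_model(**build_model_request("my-model-v1"))
```

    Separating the image (algorithm) from the artifact (model) is what lets the same container be reused by orchestrators for both training and serving.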

    Deploy models: A fundamental need for a data science team within a company is to expose a model through an API so that it can have an impact across the organization. As a team, we provide the workloads required to execute a deployment process quickly and securely, making models available through their respective APIs in minutes.
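
    The deployment step described above can be sketched as creating an endpoint configuration and then an endpoint; this is an illustrative outline, with the configuration, model, and endpoint names and the instance size all assumed:

```python
def build_endpoint_config(config_name, model_name):
    """Assemble a CreateEndpointConfig request for a real-time endpoint
    serving a single model on one instance (assumed sizing)."""
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [
            {
                "VariantName": "AllTraffic",
                "ModelName": model_name,
                "InstanceType": "ml.m5.large",  # assumed instance type
                "InitialInstanceCount": 1,
            }
        ],
    }


if __name__ == "__main__":
    import boto3  # requires AWS credentials

    sagemaker = boto3.client("sagemaker")
    sagemaker.create_endpoint_config(
        **build_endpoint_config("my-model-cfg", "my-model-v1")
    )
    # The endpoint becomes an HTTPS API once its status is InService.
    sagemaker.create_endpoint(
        EndpointName="my-model-endpoint",
        EndpointConfigName="my-model-cfg",
    )
```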

    Monitor models: Every process deployed in the cloud requires monitoring to study its behavior, i.e. to evaluate performance and stay alert to changes that could degrade the solution. For machine learning models, we configure monitoring pipelines that track the performance of model predictions, detect changes in the input data (a concept known as data drift), and study the bias of the deployed model. This is achieved through workloads that are clear and well established for the user, generating outputs in S3 that can then be consumed by visualization tools such as Amazon QuickSight.
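
    To make the data-drift idea concrete, one common drift score is the Population Stability Index (PSI), which compares the binned distribution of a feature at training time against the live traffic; this is a self-contained illustration of the concept, not the exact method used by any particular monitoring service:

```python
import math


def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample (training
    data) and a live sample. Zero means identical binned distributions;
    larger values indicate stronger drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def fractions(data):
        counts = [0] * bins
        for x in data:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Floor at a tiny value so the log below is always defined.
        return [max(c / len(data), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

    A monitoring pipeline would compute such a score per feature on a schedule and write the results to S3, where a threshold alert or a QuickSight dashboard can pick them up.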

    Manage pipeline workloads: With models correctly registered and the components offered by Amazon SageMaker properly administered, we can build custom pipelines whose step logic follows the user's definitions. For example: configure the re-training of a solution, enable a multi-model endpoint, or run a training routine over multiple hyperparameter settings in which the best model is selected and its corresponding API is deployed, all controlled by a scheduler.
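
    The best-model selection step in such a pipeline reduces to comparing the final metric of each candidate training job; a minimal sketch, assuming a list of hypothetical job results and a higher-is-better metric name:

```python
def select_best(candidates, metric="validation:auc"):
    """Return the candidate training job with the best final metric
    (higher is better, e.g. AUC). In a pipeline, the winner's model
    would then be registered and deployed to its endpoint."""
    return max(candidates, key=lambda c: c["metrics"][metric])


# Hypothetical tuning results, one dict per training job.
jobs = [
    {"job_name": "train-lr-0.1", "metrics": {"validation:auc": 0.81}},
    {"job_name": "train-lr-0.01", "metrics": {"validation:auc": 0.87}},
    {"job_name": "train-lr-0.001", "metrics": {"validation:auc": 0.84}},
]

best = select_best(jobs)  # -> the "train-lr-0.01" entry
```

    The scheduler that triggers such a pipeline (the "timer" above) would typically be a scheduled rule in a service such as Amazon EventBridge.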

    Highlights

    • Managing and executing machine learning lifecycle workloads in the AWS Cloud.
    • Deploying and monitoring models in the cloud.

    Details

    Delivery method

    Pricing

    Custom pricing options

    Pricing is based on your specific requirements and eligibility. To get a custom quote for your needs, request a private offer.

    Legal

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Resources

    Support

    Vendor support

    Support Group Email: mlops@morrisopazo.com