AWS SDK for C++ Version 1.11.440
#include <BatchClient.h>
Batch
Using Batch, you can run batch computing workloads on the Amazon Web Services Cloud. Batch computing is a common means for developers, scientists, and engineers to access large amounts of compute resources. Batch uses the advantages of batch computing to remove the undifferentiated heavy lifting of configuring and managing the required infrastructure. At the same time, it also adopts a familiar batch computing software approach. You can use Batch to efficiently provision resources, and work toward eliminating capacity constraints, reducing your overall compute costs, and delivering results more quickly.
As a fully managed service, Batch can run batch computing workloads of any scale. Batch automatically provisions compute resources and optimizes workload distribution based on the quantity and scale of your specific workloads. With Batch, there's no need to install or manage batch computing software. This means that you can focus on analyzing results and solving your specific problems instead.
Initializes client to use DefaultCredentialProviderChain, with default http client factory, and optional client config. If client config is not specified, it will be initialized to default values.
Initializes client to use SimpleAWSCredentialsProvider, with default http client factory, and optional client config. If client config is not specified, it will be initialized to default values.
Initializes client to use specified credentials provider with specified client config. If http client factory is not supplied, the default http client factory will be used.
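As a minimal sketch of how these constructors are typically used, the following constructs a BatchClient with the default credential provider chain and an explicit region. The region value is only an example, and the Aws::InitAPI/ShutdownAPI pattern is the SDK's standard process-level setup; adapt both to your own configuration.

#include <aws/core/Aws.h>
#include <aws/batch/BatchClient.h>

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);                      // initialize the SDK once per process
    {
        // Client configuration; the region here is only an example value.
        Aws::Batch::BatchClientConfiguration config;
        config.region = "us-east-1";

        // Uses the DefaultCredentialProviderChain (environment, config files, instance profile, ...).
        Aws::Batch::BatchClient client(config);

        // ... issue Batch requests with `client` ...
    }
    Aws::ShutdownAPI(options);                  // shut down the SDK before exit
    return 0;
}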
Cancels a job in a Batch job queue. Jobs that are in a SUBMITTED, PENDING, or RUNNABLE state are canceled and the job status is updated to FAILED.
A PENDING job is canceled after all dependency jobs are completed. Therefore, it might take longer than expected to cancel a job in PENDING status.
When you try to cancel an array parent job in PENDING, Batch attempts to cancel all child jobs. The array parent job is canceled when all child jobs are completed.
Jobs that progressed to the STARTING or RUNNING state aren't canceled. However, the API operation still succeeds, even if no job is canceled. These jobs must be terminated with the TerminateJob operation.
An Async wrapper for CancelJob that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 123 of file BatchClient.h.
A Callable wrapper for CancelJob that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 114 of file BatchClient.h.
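A minimal synchronous CancelJob call might look like the following sketch. The job ID and reason are placeholders, and the request/outcome types follow the SDK's generated model pattern; it assumes Aws::InitAPI has already been called and `client` is a configured BatchClient.

#include <aws/batch/BatchClient.h>
#include <aws/batch/model/CancelJobRequest.h>
#include <iostream>

void CancelExampleJob(const Aws::Batch::BatchClient& client)
{
    Aws::Batch::Model::CancelJobRequest request;
    request.SetJobId("example-job-id");              // placeholder job ID
    request.SetReason("Cancelling via the C++ SDK"); // recorded in the job's status reason

    auto outcome = client.CancelJob(request);
    if (!outcome.IsSuccess())
    {
        std::cerr << "CancelJob failed: " << outcome.GetError().GetMessage() << std::endl;
    }
}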
Creates a Batch compute environment. You can create MANAGED or UNMANAGED compute environments. MANAGED compute environments can use Amazon EC2 or Fargate resources. UNMANAGED compute environments can only use EC2 resources.
In a managed compute environment, Batch manages the capacity and instance types of the compute resources within the environment. This is based on the compute resource specification that you define or the launch template that you specify when you create the compute environment. You can choose either to use EC2 On-Demand Instances and EC2 Spot Instances, or to use Fargate and Fargate Spot capacity in your managed compute environment. You can optionally set a maximum price so that Spot Instances only launch when the Spot Instance price is less than a specified percentage of the On-Demand price.
Multi-node parallel jobs aren't supported on Spot Instances.
In an unmanaged compute environment, you can manage your own EC2 compute resources and have flexibility with how you configure your compute resources. For example, you can use custom AMIs. However, you must verify that each of your AMIs meets the Amazon ECS container instance AMI specification. For more information, see container instance AMIs in the Amazon Elastic Container Service Developer Guide. After you create your unmanaged compute environment, you can use the DescribeComputeEnvironments operation to find the Amazon ECS cluster that's associated with it. Then, launch your container instances into that Amazon ECS cluster. For more information, see Launching an Amazon ECS container instance in the Amazon Elastic Container Service Developer Guide.
To create a compute environment that uses EKS resources, the caller must have permissions to call eks:DescribeCluster.
Batch doesn't automatically upgrade the AMIs in a compute environment after it's created. For example, it doesn't update the AMIs in your compute environment when a newer version of the Amazon ECS optimized AMI is available. You're responsible for the management of the guest operating system. This includes any updates and security patches. You're also responsible for any additional application software or utilities that you install on the compute resources. There are two ways to use a new AMI for your Batch jobs. The original method is to complete these steps:
Create a new compute environment with the new AMI.
Add the compute environment to an existing job queue.
Remove the earlier compute environment from your job queue.
Delete the earlier compute environment.
In April 2022, Batch added enhanced support for updating compute environments. For more information, see Updating compute environments. To use the enhanced updating of compute environments to update AMIs, follow these rules:
Either don't set the service role (serviceRole) parameter or set it to the AWSBatchServiceRole service-linked role.
Set the allocation strategy (allocationStrategy) parameter to BEST_FIT_PROGRESSIVE, SPOT_CAPACITY_OPTIMIZED, or SPOT_PRICE_CAPACITY_OPTIMIZED.
Set the update to latest image version (updateToLatestImageVersion) parameter to true. The updateToLatestImageVersion parameter is used when you update a compute environment. This parameter is ignored when you create a compute environment.
Don't specify an AMI ID in imageId, imageIdOverride (in ec2Configuration), or in the launch template (launchTemplate). In that case, Batch selects the latest Amazon ECS optimized AMI that's supported by Batch at the time the infrastructure update is initiated. Alternatively, you can specify the AMI ID in the imageId or imageIdOverride parameters, or the launch template identified by the LaunchTemplate properties. Changing any of these properties starts an infrastructure update. If the AMI ID is specified in the launch template, it can't be replaced by specifying an AMI ID in either the imageId or imageIdOverride parameters. It can only be replaced by specifying a different launch template, or if the launch template version is set to $Default or $Latest, by setting either a new default version for the launch template (if $Default) or by adding a new version to the launch template (if $Latest).
If these rules are followed, any update that starts an infrastructure update causes the AMI ID to be re-selected. If the version setting in the launch template (launchTemplate) is set to $Latest or $Default, the latest or default version of the launch template is evaluated at the time of the infrastructure update, even if the launchTemplate wasn't updated.
An Async wrapper for CreateComputeEnvironment that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 225 of file BatchClient.h.
A Callable wrapper for CreateComputeEnvironment that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 216 of file BatchClient.h.
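As an illustrative sketch (not a complete production setup), the following creates a managed Fargate compute environment. The environment name, subnet, and security group IDs are placeholder values, and the setter and enum names assume the SDK's generated model for the Batch CreateComputeEnvironment API.

#include <aws/batch/BatchClient.h>
#include <aws/batch/model/CreateComputeEnvironmentRequest.h>
#include <aws/batch/model/ComputeResource.h>
#include <iostream>

void CreateFargateEnvironment(const Aws::Batch::BatchClient& client)
{
    // Describe the Fargate capacity for the managed environment.
    Aws::Batch::Model::ComputeResource resources;
    resources.SetType(Aws::Batch::Model::CRType::FARGATE);
    resources.SetMaxvCpus(16);                              // upper bound on concurrent vCPUs
    resources.AddSubnets("subnet-0123456789abcdef0");       // placeholder subnet
    resources.AddSecurityGroupIds("sg-0123456789abcdef0");  // placeholder security group

    Aws::Batch::Model::CreateComputeEnvironmentRequest request;
    request.SetComputeEnvironmentName("example-fargate-ce"); // placeholder name
    request.SetType(Aws::Batch::Model::CEType::MANAGED);
    request.SetComputeResources(resources);
    // serviceRole is left unset so the AWSBatchServiceRole service-linked role is used.

    auto outcome = client.CreateComputeEnvironment(request);
    if (outcome.IsSuccess())
    {
        std::cout << "Created: " << outcome.GetResult().GetComputeEnvironmentArn() << std::endl;
    }
    else
    {
        std::cerr << "CreateComputeEnvironment failed: " << outcome.GetError().GetMessage() << std::endl;
    }
}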
Creates a Batch job queue. When you create a job queue, you associate one or more compute environments to the queue and assign an order of preference for the compute environments.
You also set a priority to the job queue that determines the order that the Batch scheduler places jobs onto its associated compute environments. For example, if a compute environment is associated with more than one job queue, the job queue with a higher priority is given preference for scheduling jobs to that compute environment.
An Async wrapper for CreateJobQueue that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 257 of file BatchClient.h.
A Callable wrapper for CreateJobQueue that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 248 of file BatchClient.h.
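The sketch below creates a job queue attached to a single compute environment. The queue name and priority are placeholders, the compute environment ARN is supplied by the caller, and the model setter names assume the SDK's generated CreateJobQueue request types.

#include <aws/batch/BatchClient.h>
#include <aws/batch/model/CreateJobQueueRequest.h>
#include <aws/batch/model/ComputeEnvironmentOrder.h>
#include <iostream>

void CreateExampleJobQueue(const Aws::Batch::BatchClient& client,
                           const Aws::String& computeEnvironmentArn)
{
    // Jobs from this queue are placed onto compute environments in ascending order value.
    Aws::Batch::Model::ComputeEnvironmentOrder order;
    order.SetOrder(1);
    order.SetComputeEnvironment(computeEnvironmentArn);

    Aws::Batch::Model::CreateJobQueueRequest request;
    request.SetJobQueueName("example-queue");   // placeholder queue name
    request.SetPriority(10);                    // higher values are scheduled first
    request.AddComputeEnvironmentOrder(order);

    auto outcome = client.CreateJobQueue(request);
    if (!outcome.IsSuccess())
    {
        std::cerr << "CreateJobQueue failed: " << outcome.GetError().GetMessage() << std::endl;
    }
}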
An Async wrapper for CreateSchedulingPolicy that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 282 of file BatchClient.h.
A Callable wrapper for CreateSchedulingPolicy that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 273 of file BatchClient.h.
Deletes a Batch compute environment.
Before you can delete a compute environment, you must set its state to DISABLED with the UpdateComputeEnvironment API operation and disassociate it from any job queues with the UpdateJobQueue API operation. Compute environments that use Fargate resources must terminate all active jobs on that compute environment before deleting the compute environment. If this isn't done, the compute environment enters an invalid state.
An Async wrapper for DeleteComputeEnvironment that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 313 of file BatchClient.h.
A Callable wrapper for DeleteComputeEnvironment that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 304 of file BatchClient.h.
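The disable-then-delete sequence described above might look like the following sketch. The compute environment name is a placeholder, the request types assume the generated Update/DeleteComputeEnvironment models, and it assumes the environment has no remaining active jobs.

#include <aws/batch/BatchClient.h>
#include <aws/batch/model/UpdateComputeEnvironmentRequest.h>
#include <aws/batch/model/DeleteComputeEnvironmentRequest.h>
#include <iostream>

void DisableAndDeleteComputeEnvironment(const Aws::Batch::BatchClient& client,
                                        const Aws::String& name)
{
    // Step 1: set the compute environment state to DISABLED.
    Aws::Batch::Model::UpdateComputeEnvironmentRequest update;
    update.SetComputeEnvironment(name);
    update.SetState(Aws::Batch::Model::CEState::DISABLED);
    if (!client.UpdateComputeEnvironment(update).IsSuccess())
    {
        std::cerr << "UpdateComputeEnvironment failed" << std::endl;
        return;
    }

    // Step 2: delete it. In practice, wait for the state transition to complete
    // (and remove the environment from any job queues) before this call.
    Aws::Batch::Model::DeleteComputeEnvironmentRequest del;
    del.SetComputeEnvironment(name);
    if (!client.DeleteComputeEnvironment(del).IsSuccess())
    {
        std::cerr << "DeleteComputeEnvironment failed" << std::endl;
    }
}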
Deletes the specified job queue. You must first disable submissions for a queue with the UpdateJobQueue operation. All jobs in the queue are eventually terminated when you delete a job queue. The jobs are terminated at a rate of about 16 jobs each second.
It's not necessary to disassociate compute environments from a queue before submitting a DeleteJobQueue request.
An Async wrapper for DeleteJobQueue that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 343 of file BatchClient.h.
A Callable wrapper for DeleteJobQueue that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 334 of file BatchClient.h.
Deletes the specified scheduling policy.
You can't delete a scheduling policy that's used in any job queues.
An Async wrapper for DeleteSchedulingPolicy that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 369 of file BatchClient.h.
A Callable wrapper for DeleteSchedulingPolicy that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 360 of file BatchClient.h.
Deregisters a Batch job definition. Job definitions are permanently deleted after 180 days.
An Async wrapper for DeregisterJobDefinition that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 395 of file BatchClient.h.
A Callable wrapper for DeregisterJobDefinition that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 386 of file BatchClient.h.
Describes one or more of your compute environments.
If you're using an unmanaged compute environment, you can use the DescribeComputeEnvironments operation to determine the ecsClusterArn that you launch your Amazon ECS container instances into.
An Async wrapper for DescribeComputeEnvironments that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 424 of file BatchClient.h.
A Callable wrapper for DescribeComputeEnvironments that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 415 of file BatchClient.h.
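For example, a sketch that looks up the ECS cluster ARN behind each compute environment; the accessor names assume the generated ComputeEnvironmentDetail model, and an empty request is used to describe every environment in the account and Region.

#include <aws/batch/BatchClient.h>
#include <aws/batch/model/DescribeComputeEnvironmentsRequest.h>
#include <iostream>

void PrintEcsClusterArns(const Aws::Batch::BatchClient& client)
{
    // Leaving the request empty describes all compute environments.
    Aws::Batch::Model::DescribeComputeEnvironmentsRequest request;

    auto outcome = client.DescribeComputeEnvironments(request);
    if (!outcome.IsSuccess())
    {
        std::cerr << outcome.GetError().GetMessage() << std::endl;
        return;
    }
    for (const auto& ce : outcome.GetResult().GetComputeEnvironments())
    {
        std::cout << ce.GetComputeEnvironmentName() << " -> " << ce.GetEcsClusterArn() << std::endl;
    }
}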
Describes a list of job definitions. You can specify a status (such as ACTIVE) to only return job definitions that match that status.
An Async wrapper for DescribeJobDefinitions that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 451 of file BatchClient.h.
A Callable wrapper for DescribeJobDefinitions that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 442 of file BatchClient.h.
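A sketch that lists only ACTIVE job definitions; the status string matches the API's documented values, and the result accessors assume the generated JobDefinition model.

#include <aws/batch/BatchClient.h>
#include <aws/batch/model/DescribeJobDefinitionsRequest.h>
#include <iostream>

void PrintActiveJobDefinitions(const Aws::Batch::BatchClient& client)
{
    Aws::Batch::Model::DescribeJobDefinitionsRequest request;
    request.SetStatus("ACTIVE");   // only return job definitions with this status

    auto outcome = client.DescribeJobDefinitions(request);
    if (outcome.IsSuccess())
    {
        for (const auto& def : outcome.GetResult().GetJobDefinitions())
        {
            std::cout << def.GetJobDefinitionName() << ":" << def.GetRevision() << std::endl;
        }
    }
}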
An Async wrapper for DescribeJobQueues that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 476 of file BatchClient.h.
A Callable wrapper for DescribeJobQueues that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 467 of file BatchClient.h.
An Async wrapper for DescribeJobs that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 501 of file BatchClient.h.
A Callable wrapper for DescribeJobs that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 492 of file BatchClient.h.
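The Async and Callable wrappers follow the same pattern for every operation on this client. As a sketch using DescribeJobs, the Callable form returns a future that can be waited on later, and the Async form queues the request and invokes a callback when it finishes. The job ID is a placeholder; in real code, make sure the client outlives any pending async calls.

#include <aws/batch/BatchClient.h>
#include <aws/batch/model/DescribeJobsRequest.h>
#include <aws/core/client/AsyncCallerContext.h>
#include <iostream>

void DescribeJobBothWays(const Aws::Batch::BatchClient& client)
{
    Aws::Batch::Model::DescribeJobsRequest request;
    request.AddJobs("example-job-id");   // placeholder job ID

    // Callable wrapper: returns a future so the call can run in parallel with other work.
    auto future = client.DescribeJobsCallable(request);
    auto outcome = future.get();
    if (outcome.IsSuccess() && !outcome.GetResult().GetJobs().empty())
    {
        std::cout << "Job name: " << outcome.GetResult().GetJobs()[0].GetJobName() << std::endl;
    }

    // Async wrapper: queues the request into a thread executor and triggers the callback when done.
    client.DescribeJobsAsync(request,
        [](const Aws::Batch::BatchClient*,
           const Aws::Batch::Model::DescribeJobsRequest&,
           const Aws::Batch::Model::DescribeJobsOutcome& asyncOutcome,
           const std::shared_ptr<const Aws::Client::AsyncCallerContext>&)
        {
            std::cout << "DescribeJobs finished, success = " << asyncOutcome.IsSuccess() << std::endl;
        });
}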
An Async wrapper for DescribeSchedulingPolicies that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 527 of file BatchClient.h.
A Callable wrapper for DescribeSchedulingPolicies that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 518 of file BatchClient.h.
Provides a list of the first 100 RUNNABLE jobs associated with a single job queue.
An Async wrapper for GetJobQueueSnapshot that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 553 of file BatchClient.h.
A Callable wrapper for GetJobQueueSnapshot that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 544 of file BatchClient.h.
Returns a list of Batch jobs.
You must specify only one of the following items:
A job queue ID to return a list of jobs in that job queue
A multi-node parallel job ID to return a list of nodes for that job
An array job ID to return a list of the children for that job
You can filter the results by job status with the jobStatus parameter. If you don't specify a status, only RUNNING jobs are returned.
An Async wrapper for ListJobs that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 584 of file BatchClient.h.
A Callable wrapper for ListJobs that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 575 of file BatchClient.h.
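A sketch that lists the RUNNING jobs in one queue; the queue name is a placeholder, the setter and enum names assume the generated ListJobs model, and pagination via nextToken is omitted for brevity.

#include <aws/batch/BatchClient.h>
#include <aws/batch/model/ListJobsRequest.h>
#include <iostream>

void ListRunningJobs(const Aws::Batch::BatchClient& client)
{
    Aws::Batch::Model::ListJobsRequest request;
    request.SetJobQueue("example-queue");                        // placeholder queue name
    request.SetJobStatus(Aws::Batch::Model::JobStatus::RUNNING); // filter by job status

    auto outcome = client.ListJobs(request);
    if (outcome.IsSuccess())
    {
        for (const auto& job : outcome.GetResult().GetJobSummaryList())
        {
            std::cout << job.GetJobId() << "  " << job.GetJobName() << std::endl;
        }
    }
}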
An Async wrapper for ListSchedulingPolicies that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 609 of file BatchClient.h.
A Callable wrapper for ListSchedulingPolicies that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 600 of file BatchClient.h.
An Async wrapper for ListTagsForResource that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 637 of file BatchClient.h.
A Callable wrapper for ListTagsForResource that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 628 of file BatchClient.h.
An Async wrapper for RegisterJobDefinition that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 662 of file BatchClient.h.
A Callable wrapper for RegisterJobDefinition that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 653 of file BatchClient.h.
Submits a Batch job from a job definition. Parameters that are specified during SubmitJob override parameters defined in the job definition. vCPU and memory requirements that are specified in the resourceRequirements objects in the job definition are the exception. They can't be overridden this way using the memory and vcpus parameters. Rather, you must specify updates to job definition parameters in a resourceRequirements object that's included in the containerOverrides parameter.
Job queues with a scheduling policy are limited to 500 active fair share identifiers at a time.
Jobs that run on Fargate resources can't be guaranteed to run for more than 14 days. This is because, after 14 days, Fargate resources might become unavailable and the job might be terminated.
An Async wrapper for SubmitJob that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 699 of file BatchClient.h.
A Callable wrapper for SubmitJob that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 690 of file BatchClient.h.
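The sketch below submits a job and overrides its vCPU and memory through resourceRequirements objects inside containerOverrides, as the note above requires. The job name, queue, job definition, and resource values are placeholders, and the model setter names assume the generated SubmitJob request types.

#include <aws/batch/BatchClient.h>
#include <aws/batch/model/SubmitJobRequest.h>
#include <aws/batch/model/ContainerOverrides.h>
#include <aws/batch/model/ResourceRequirement.h>
#include <iostream>

void SubmitExampleJob(const Aws::Batch::BatchClient& client)
{
    // Override vCPU and memory with resourceRequirements, not the legacy vcpus/memory fields.
    Aws::Batch::Model::ResourceRequirement vcpu;
    vcpu.SetType(Aws::Batch::Model::ResourceType::VCPU);
    vcpu.SetValue("2");

    Aws::Batch::Model::ResourceRequirement memory;
    memory.SetType(Aws::Batch::Model::ResourceType::MEMORY);
    memory.SetValue("4096");                       // MiB

    Aws::Batch::Model::ContainerOverrides overrides;
    overrides.AddResourceRequirements(vcpu);
    overrides.AddResourceRequirements(memory);

    Aws::Batch::Model::SubmitJobRequest request;
    request.SetJobName("example-job");             // placeholder job name
    request.SetJobQueue("example-queue");          // placeholder queue
    request.SetJobDefinition("example-def:1");     // placeholder definition name:revision
    request.SetContainerOverrides(overrides);

    auto outcome = client.SubmitJob(request);
    if (outcome.IsSuccess())
    {
        std::cout << "Submitted job " << outcome.GetResult().GetJobId() << std::endl;
    }
    else
    {
        std::cerr << "SubmitJob failed: " << outcome.GetError().GetMessage() << std::endl;
    }
}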
Associates the specified tags to a resource with the specified resourceArn. If existing tags on a resource aren't specified in the request parameters, they aren't changed. When a resource is deleted, the tags that are associated with that resource are deleted as well. Batch resources that support tags are compute environments, jobs, job definitions, job queues, and scheduling policies. ARNs for child jobs of array and multi-node parallel (MNP) jobs aren't supported.
An Async wrapper for TagResource that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 730 of file BatchClient.h.
A Callable wrapper for TagResource that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 721 of file BatchClient.h.
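For example, a sketch that tags a job queue; the ARN is supplied by the caller, the tag keys and values are placeholders, and the setter names assume the generated TagResource model.

#include <aws/batch/BatchClient.h>
#include <aws/batch/model/TagResourceRequest.h>
#include <iostream>

void TagExampleQueue(const Aws::Batch::BatchClient& client, const Aws::String& jobQueueArn)
{
    Aws::Batch::Model::TagResourceRequest request;
    request.SetResourceArn(jobQueueArn);
    request.AddTags("Team", "batch-example");   // placeholder tag key/value
    request.AddTags("Stage", "test");

    auto outcome = client.TagResource(request);
    if (!outcome.IsSuccess())
    {
        std::cerr << "TagResource failed: " << outcome.GetError().GetMessage() << std::endl;
    }
}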
Terminates a job in a job queue. Jobs that are in the STARTING or RUNNING state are terminated, which causes them to transition to FAILED. Jobs that have not progressed to the STARTING state are canceled.
An Async wrapper for TerminateJob that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 758 of file BatchClient.h.
A Callable wrapper for TerminateJob that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 749 of file BatchClient.h.
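And a sketch for TerminateJob, which (unlike CancelJob) also stops jobs that have reached the STARTING or RUNNING state; the job ID and reason are placeholders.

#include <aws/batch/BatchClient.h>
#include <aws/batch/model/TerminateJobRequest.h>
#include <iostream>

void TerminateExampleJob(const Aws::Batch::BatchClient& client)
{
    Aws::Batch::Model::TerminateJobRequest request;
    request.SetJobId("example-job-id");                // placeholder job ID
    request.SetReason("Terminating via the C++ SDK");  // recorded in the job's status reason

    auto outcome = client.TerminateJob(request);
    if (!outcome.IsSuccess())
    {
        std::cerr << "TerminateJob failed: " << outcome.GetError().GetMessage() << std::endl;
    }
}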
An Async wrapper for UntagResource that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 783 of file BatchClient.h.
A Callable wrapper for UntagResource that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 774 of file BatchClient.h.
An Async wrapper for UpdateComputeEnvironment that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 808 of file BatchClient.h.
A Callable wrapper for UpdateComputeEnvironment that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 799 of file BatchClient.h.
An Async wrapper for UpdateJobQueue that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 833 of file BatchClient.h.
A Callable wrapper for UpdateJobQueue that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 824 of file BatchClient.h.
An Async wrapper for UpdateSchedulingPolicy that queues the request into a thread executor and triggers associated callback when operation has finished.
Definition at line 858 of file BatchClient.h.
A Callable wrapper for UpdateSchedulingPolicy that returns a future to the operation so that it can be executed in parallel to other requests.
Definition at line 849 of file BatchClient.h.