DataPipeline.Client

A low-level client representing AWS Data Pipeline:

client = session.create_client('datapipeline')
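As a minimal sketch of how such a client might be constructed with botocore (the region name is an assumption; credentials are expected to come from the environment or configuration files):

# Hypothetical setup; region and credentials are assumptions.
import botocore.session

session = botocore.session.get_session()
client = session.create_client('datapipeline', region_name='us-east-1')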
These are the available methods:
activate_pipeline()
add_tags()
can_paginate()
create_pipeline()
deactivate_pipeline()
delete_pipeline()
describe_objects()
describe_pipelines()
evaluate_expression()
generate_presigned_url()
get_paginator()
get_pipeline_definition()
get_waiter()
list_pipelines()
poll_for_task()
put_pipeline_definition()
query_objects()
remove_tags()
report_task_progress()
report_task_runner_heartbeat()
set_status()
set_task_status()
validate_pipeline_definition()
activate_pipeline(**kwargs)

Validates the specified pipeline and starts processing pipeline tasks. If the pipeline does not pass validation, activation fails.
If you need to pause the pipeline to investigate an issue with a component, such as a data source or script, call DeactivatePipeline .
To activate a finished pipeline, modify the end date for the pipeline and then activate it.
Request Syntax
response = client.activate_pipeline(
pipelineId='string',
parameterValues=[
{
'id': 'string',
'stringValue': 'string'
},
],
startTimestamp=datetime(2015, 1, 1)
)
Return type: dict
Returns:
Response Syntax
{}
Response Structure
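A hypothetical usage sketch; the pipeline ID and parameter ID below are placeholders, not values returned by any real pipeline:

from datetime import datetime

# Placeholder identifiers for illustration only.
response = client.activate_pipeline(
    pipelineId='df-0123456789EXAMPLE',
    parameterValues=[
        {'id': 'myStartDate', 'stringValue': '2015-01-01'},
    ],
    startTimestamp=datetime(2015, 1, 1)
)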
add_tags(**kwargs)

Adds or modifies tags for the specified pipeline.
Request Syntax
response = client.add_tags(
pipelineId='string',
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
Return type: dict
Returns:
Response Syntax
{}
Response Structure
can_paginate(operation_name)

Check if an operation can be paginated.
Parameters: operation_name (string) -- The operation name. This is the same name as the method name on the client. For example, if the method name is create_foo, and you'd normally invoke the operation as client.create_foo(**kwargs), if the create_foo operation can be paginated, you can use the call client.get_paginator("create_foo").
Returns: True if the operation can be paginated, False otherwise.
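For example, checking whether describe_objects supports pagination before requesting a paginator (a sketch; the operation name comes from the method list above):

# Only request a paginator when the operation supports it.
if client.can_paginate('describe_objects'):
    paginator = client.get_paginator('describe_objects')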
create_pipeline(**kwargs)

Creates a new, empty pipeline. Use PutPipelineDefinition to populate the pipeline.
Request Syntax
response = client.create_pipeline(
name='string',
uniqueId='string',
description='string',
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
Return type: dict
Returns:
Response Syntax
{
'pipelineId': 'string'
}
Response Structure
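A sketch of creating a pipeline and capturing its ID; the name, uniqueId, and tag values are hypothetical. Because uniqueId makes the call idempotent, retrying with the same value does not create a duplicate pipeline.

# Hypothetical names; uniqueId lets the call be retried safely.
response = client.create_pipeline(
    name='my-daily-report-pipeline',
    uniqueId='my-daily-report-pipeline-token',
    description='Example pipeline created from the API',
    tags=[
        {'key': 'team', 'value': 'analytics'},
    ]
)
pipeline_id = response['pipelineId']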
deactivate_pipeline(**kwargs)

Deactivates the specified running pipeline. The pipeline is set to the DEACTIVATING state until the deactivation process completes.
To resume a deactivated pipeline, use ActivatePipeline . By default, the pipeline resumes from the last completed execution. Optionally, you can specify the date and time to resume the pipeline.
Request Syntax
response = client.deactivate_pipeline(
pipelineId='string',
cancelActive=True|False
)
Return type: dict
Returns:
Response Syntax
{}
Response Structure
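For instance, to pause a running pipeline while investigating a failing component and later resume it (placeholder pipeline ID; cancelActive=False lets running objects finish before deactivation):

# Placeholder pipeline ID for illustration.
client.deactivate_pipeline(pipelineId='df-0123456789EXAMPLE', cancelActive=False)
# ... investigate the issue, then resume from the last completed execution:
client.activate_pipeline(pipelineId='df-0123456789EXAMPLE')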
delete_pipeline(**kwargs)

Deletes a pipeline, its pipeline definition, and its run history. AWS Data Pipeline attempts to cancel instances associated with the pipeline that are currently being processed by task runners.
Deleting a pipeline cannot be undone. You cannot query or restore a deleted pipeline. To temporarily pause a pipeline instead of deleting it, call SetStatus with the status set to PAUSE on individual components. Components that are paused by SetStatus can be resumed.
Request Syntax
response = client.delete_pipeline(
pipelineId='string'
)
Parameters: pipelineId (string) -- [REQUIRED] The ID of the pipeline.
Returns: None
describe_objects(**kwargs)

Gets the object definitions for a set of objects associated with the pipeline. Object definitions are composed of a set of fields that define the properties of the object.
Request Syntax
response = client.describe_objects(
pipelineId='string',
objectIds=[
'string',
],
evaluateExpressions=True|False,
marker='string'
)
Return type: dict
Returns:
Response Syntax
{
'pipelineObjects': [
{
'id': 'string',
'name': 'string',
'fields': [
{
'key': 'string',
'stringValue': 'string',
'refValue': 'string'
},
]
},
],
'marker': 'string',
'hasMoreResults': True|False
}
Response Structure
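Because the response is paged with marker and hasMoreResults, a caller can loop until all objects are returned. A sketch with placeholder pipeline and object IDs:

# Collect all requested object definitions across pages.
object_ids = ['Default', 'Schedule']  # hypothetical object IDs
objects, marker = [], ''
while True:
    kwargs = {'pipelineId': 'df-0123456789EXAMPLE', 'objectIds': object_ids}
    if marker:
        kwargs['marker'] = marker
    page = client.describe_objects(**kwargs)
    objects.extend(page['pipelineObjects'])
    if not page.get('hasMoreResults'):
        break
    marker = page['marker']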
describe_pipelines(**kwargs)

Retrieves metadata about one or more pipelines. The information retrieved includes the name of the pipeline, the pipeline identifier, its current state, and the user account that owns the pipeline. Using account credentials, you can retrieve metadata about pipelines that you or your IAM users have created. If you are using an IAM user account, you can retrieve metadata about only those pipelines for which you have read permissions.
To retrieve the full pipeline definition instead of metadata about the pipeline, call GetPipelineDefinition .
Request Syntax
response = client.describe_pipelines(
pipelineIds=[
'string',
]
)
Parameters: pipelineIds (list) -- [REQUIRED] The IDs of the pipelines to describe. You can pass as many as 25 identifiers in a single call. To obtain pipeline IDs, call ListPipelines.
Return type: dict
Returns:
Response Syntax
{
'pipelineDescriptionList': [
{
'pipelineId': 'string',
'name': 'string',
'fields': [
{
'key': 'string',
'stringValue': 'string',
'refValue': 'string'
},
],
'description': 'string',
'tags': [
{
'key': 'string',
'value': 'string'
},
]
},
]
}
Response Structure
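A sketch that combines ListPipelines and DescribePipelines to print a state field for each pipeline; the '@pipelineState' field key is an assumption about the fields returned:

ids = [p['id'] for p in client.list_pipelines()['pipelineIdList']]
# The API accepts at most 25 IDs per call, so chunk the list.
for start in range(0, len(ids), 25):
    described = client.describe_pipelines(pipelineIds=ids[start:start + 25])
    for desc in described['pipelineDescriptionList']:
        fields = {f['key']: f.get('stringValue') for f in desc['fields']}
        print(desc['name'], fields.get('@pipelineState'))  # field key is an assumption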
evaluate_expression(**kwargs)

Task runners call EvaluateExpression to evaluate a string in the context of the specified object. For example, a task runner can evaluate SQL queries stored in Amazon S3.
Request Syntax
response = client.evaluate_expression(
pipelineId='string',
objectId='string',
expression='string'
)
Return type: dict
Returns:
Response Syntax
{
'evaluatedExpression': 'string'
}
Response Structure
generate_presigned_url(ClientMethod, Params=None, ExpiresIn=3600, HttpMethod=None)

Generate a presigned URL given a client, its method, and arguments.
Returns: The presigned URL
get_paginator(operation_name)

Create a paginator for an operation.
Parameters: operation_name (string) -- The operation name. This is the same name as the method name on the client. For example, if the method name is create_foo, and you'd normally invoke the operation as client.create_foo(**kwargs), if the create_foo operation can be paginated, you can use the call client.get_paginator("create_foo").
Raises: OperationNotPageableError -- Raised if the operation is not pageable. You can use the client.can_paginate method to check if an operation is pageable.
Return type: botocore.paginate.Paginator
Returns: A paginator object.
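For example, building the list_pipelines paginator and iterating over the resulting pages (see also the paginator section below):

paginator = client.get_paginator('list_pipelines')
for page in paginator.paginate():
    for summary in page['pipelineIdList']:
        print(summary['id'], summary['name'])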
get_pipeline_definition(**kwargs)

Gets the definition of the specified pipeline. You can call GetPipelineDefinition to retrieve the pipeline definition that you provided using PutPipelineDefinition.
Request Syntax
response = client.get_pipeline_definition(
pipelineId='string',
version='string'
)
Return type: dict
Returns:
Response Syntax
{
'pipelineObjects': [
{
'id': 'string',
'name': 'string',
'fields': [
{
'key': 'string',
'stringValue': 'string',
'refValue': 'string'
},
]
},
],
'parameterObjects': [
{
'id': 'string',
'attributes': [
{
'key': 'string',
'stringValue': 'string'
},
]
},
],
'parameterValues': [
{
'id': 'string',
'stringValue': 'string'
},
]
}
Response Structure
get_waiter(waiter_name)

list_pipelines(**kwargs)

Lists the pipeline identifiers for all active pipelines that you have permission to access.
Request Syntax
response = client.list_pipelines(
marker='string'
)
Parameters: marker (string) -- The starting point for the results to be returned. For the first call, this value should be empty. As long as there are more results, continue to call ListPipelines with the marker value from the previous call to retrieve the next set of results.
Return type: dict
Returns:
Response Syntax
{
'pipelineIdList': [
{
'id': 'string',
'name': 'string'
},
],
'marker': 'string',
'hasMoreResults': True|False
}
Response Structure
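A sketch of the manual marker loop the parameter description above implies; the paginator shown later does the same thing with less code:

pipelines, marker = [], ''
while True:
    page = client.list_pipelines(marker=marker) if marker else client.list_pipelines()
    pipelines.extend(page['pipelineIdList'])
    if not page.get('hasMoreResults'):
        break
    marker = page['marker']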
poll_for_task(**kwargs)

Task runners call PollForTask to receive a task to perform from AWS Data Pipeline. The task runner specifies which tasks it can perform by setting a value for the workerGroup parameter. The task returned can come from any of the pipelines that match the workerGroup value passed in by the task runner and that was launched using the IAM user credentials specified by the task runner.
If tasks are ready in the work queue, PollForTask returns a response immediately. If no tasks are available in the queue, PollForTask uses long-polling and holds on to a poll connection for up to 90 seconds, during which time the first newly scheduled task is handed to the task runner. To accommodate this, set the socket timeout in your task runner to 90 seconds. The task runner should not call PollForTask again on the same workerGroup until it receives a response, and this can take up to 90 seconds.
Request Syntax
response = client.poll_for_task(
workerGroup='string',
hostname='string',
instanceIdentity={
'document': 'string',
'signature': 'string'
}
)
Return type: dict
Returns:
Response Syntax
{
'taskObject': {
'taskId': 'string',
'pipelineId': 'string',
'attemptId': 'string',
'objects': {
'string': {
'id': 'string',
'name': 'string',
'fields': [
{
'key': 'string',
'stringValue': 'string',
'refValue': 'string'
},
]
}
}
}
}
Response Structure
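A simplified task-runner polling loop, assuming a worker group name of your own choosing and a do_work helper that is purely hypothetical:

import time

def do_work(task):
    """Hypothetical placeholder for the task runner's actual work."""
    pass

while True:
    result = client.poll_for_task(workerGroup='my-worker-group', hostname='worker-01')
    task = result.get('taskObject')
    if not task:
        # Long polling returned without a task; poll again.
        time.sleep(1)
        continue
    do_work(task)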
put_pipeline_definition(**kwargs)

Adds tasks, schedules, and preconditions to the specified pipeline. You can use PutPipelineDefinition to populate a new pipeline.
PutPipelineDefinition also validates the configuration as it adds it to the pipeline. Changes to the pipeline are saved unless one of the following three validation errors exists in the pipeline:
An object is missing a name or identifier field.
A string or reference field is empty.
The number of objects in the pipeline exceeds the maximum allowed objects.
Pipeline object definitions are passed to the PutPipelineDefinition action and returned by the GetPipelineDefinition action.
Request Syntax
response = client.put_pipeline_definition(
pipelineId='string',
pipelineObjects=[
{
'id': 'string',
'name': 'string',
'fields': [
{
'key': 'string',
'stringValue': 'string',
'refValue': 'string'
},
]
},
],
parameterObjects=[
{
'id': 'string',
'attributes': [
{
'key': 'string',
'stringValue': 'string'
},
]
},
],
parameterValues=[
{
'id': 'string',
'stringValue': 'string'
},
]
)
Return type: dict
Returns:
Response Syntax
{
'validationErrors': [
{
'id': 'string',
'errors': [
'string',
]
},
],
'validationWarnings': [
{
'id': 'string',
'warnings': [
'string',
]
},
],
'errored': True|False
}
Response Structure
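A sketch of putting a definition and checking the validation results before activating; the single Default object shown is only illustrative, not a complete pipeline definition:

result = client.put_pipeline_definition(
    pipelineId='df-0123456789EXAMPLE',  # placeholder ID
    pipelineObjects=[
        {
            'id': 'Default',
            'name': 'Default',
            'fields': [
                {'key': 'scheduleType', 'stringValue': 'ondemand'},
            ],
        },
    ],
)
if result['errored']:
    # Validation errors mean the definition was not saved.
    for err in result['validationErrors']:
        print(err['id'], err['errors'])
else:
    client.activate_pipeline(pipelineId='df-0123456789EXAMPLE')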
query_objects(**kwargs)

Queries the specified pipeline for the names of objects that match the specified set of conditions.
Request Syntax
response = client.query_objects(
pipelineId='string',
query={
'selectors': [
{
'fieldName': 'string',
'operator': {
'type': 'EQ'|'REF_EQ'|'LE'|'GE'|'BETWEEN',
'values': [
'string',
]
}
},
]
},
sphere='string',
marker='string',
limit=123
)
Return type: dict
Returns:
Response Syntax
{
'ids': [
'string',
],
'marker': 'string',
'hasMoreResults': True|False
}
Response Structure
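A sketch of querying the INSTANCE sphere for objects whose @status field equals WAITING_ON_DEPENDENCIES; the selector field name and value are assumptions for illustration:

# Placeholder pipeline ID; selector values are assumptions.
result = client.query_objects(
    pipelineId='df-0123456789EXAMPLE',
    sphere='INSTANCE',
    query={
        'selectors': [
            {
                'fieldName': '@status',
                'operator': {'type': 'EQ', 'values': ['WAITING_ON_DEPENDENCIES']},
            },
        ]
    },
    limit=100,
)
matching_ids = result['ids']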
remove_tags(**kwargs)

Removes existing tags from the specified pipeline.
Request Syntax
response = client.remove_tags(
pipelineId='string',
tagKeys=[
'string',
]
)
Return type: dict
Returns:
Response Syntax
{}
Response Structure
report_task_progress(**kwargs)

Task runners call ReportTaskProgress when assigned a task to acknowledge that it has the task. If the web service does not receive this acknowledgement within 2 minutes, it assigns the task in a subsequent PollForTask call. After this initial acknowledgement, the task runner only needs to report progress every 15 minutes to maintain its ownership of the task. You can change this reporting time from 15 minutes by specifying a reportProgressTimeout field in your pipeline.
If a task runner does not report its status after 5 minutes, AWS Data Pipeline assumes that the task runner is unable to process the task and reassigns the task in a subsequent response to PollForTask. Task runners should call ReportTaskProgress every 60 seconds.
Request Syntax
response = client.report_task_progress(
taskId='string',
fields=[
{
'key': 'string',
'stringValue': 'string',
'refValue': 'string'
},
]
)
Return type: dict
Returns:
Response Syntax
{
'canceled': True|False
}
Response Structure
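A sketch of the periodic progress report a task runner might issue while working on a task; the taskId is assumed to come from a prior PollForTask response:

# taskId is a placeholder taken from a prior poll_for_task call.
progress = client.report_task_progress(taskId='task-id-from-poll')
if progress['canceled']:
    print('Task was canceled by the service; abandon the work.')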
report_task_runner_heartbeat(**kwargs)

Task runners call ReportTaskRunnerHeartbeat every 15 minutes to indicate that they are operational. If the AWS Data Pipeline Task Runner is launched on a resource managed by AWS Data Pipeline, the web service can use this call to detect when the task runner application has failed and restart a new instance.
Request Syntax
response = client.report_task_runner_heartbeat(
taskrunnerId='string',
workerGroup='string',
hostname='string'
)
Return type: dict
Returns:
Response Syntax
{
'terminate': True|False
}
Response Structure
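A sketch of the heartbeat a task runner might send every 15 minutes; the taskrunnerId, workerGroup, and hostname values are placeholders:

heartbeat = client.report_task_runner_heartbeat(
    taskrunnerId='runner-01',
    workerGroup='my-worker-group',
    hostname='worker-01',
)
if heartbeat['terminate']:
    print('Service requested termination; shut this task runner down.')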
set_status(**kwargs)

Requests that the status of the specified physical or logical pipeline objects be updated in the specified pipeline. This update might not occur immediately, but is eventually consistent. The status that can be set depends on the type of object (for example, DataNode or Activity). You cannot perform this operation on FINISHED pipelines and attempting to do so returns InvalidRequestException.
Request Syntax
response = client.set_status(
pipelineId='string',
objectIds=[
'string',
],
status='string'
)
Returns: None
set_task_status(**kwargs)

Task runners call SetTaskStatus to notify AWS Data Pipeline that a task is completed and provide information about the final status. A task runner makes this call regardless of whether the task was successful. A task runner does not need to call SetTaskStatus for tasks that are canceled by the web service during a call to ReportTaskProgress.
Request Syntax
response = client.set_task_status(
taskId='string',
taskStatus='FINISHED'|'FAILED'|'FALSE',
errorId='string',
errorMessage='string',
errorStackTrace='string'
)
Return type: dict
Returns:
Response Syntax
{}
Response Structure
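A sketch of reporting a failed task with error details; the task ID and error identifier are placeholders:

import traceback

try:
    raise RuntimeError('example failure')  # stand-in for real task work
except Exception as exc:
    client.set_task_status(
        taskId='task-id-from-poll',  # placeholder
        taskStatus='FAILED',
        errorId='MyTaskRunner.Error',  # hypothetical error code
        errorMessage=str(exc),
        errorStackTrace=traceback.format_exc(),
    )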
validate_pipeline_definition(**kwargs)

Validates the specified pipeline definition to ensure that it is well formed and can be run without error.
Request Syntax
response = client.validate_pipeline_definition(
pipelineId='string',
pipelineObjects=[
{
'id': 'string',
'name': 'string',
'fields': [
{
'key': 'string',
'stringValue': 'string',
'refValue': 'string'
},
]
},
],
parameterObjects=[
{
'id': 'string',
'attributes': [
{
'key': 'string',
'stringValue': 'string'
},
]
},
],
parameterValues=[
{
'id': 'string',
'stringValue': 'string'
},
]
)
Return type: dict
Returns:
Response Syntax
{
'validationErrors': [
{
'id': 'string',
'errors': [
'string',
]
},
],
'validationWarnings': [
{
'id': 'string',
'warnings': [
'string',
]
},
],
'errored': True|False
}
Response Structure
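A sketch of validating a definition before saving it with PutPipelineDefinition; the objects reuse the illustrative definition from the put_pipeline_definition example above:

definition = {
    'pipelineObjects': [
        {
            'id': 'Default',
            'name': 'Default',
            'fields': [{'key': 'scheduleType', 'stringValue': 'ondemand'}],
        },
    ],
}
check = client.validate_pipeline_definition(
    pipelineId='df-0123456789EXAMPLE',  # placeholder ID
    **definition,
)
if not check['errored']:
    client.put_pipeline_definition(pipelineId='df-0123456789EXAMPLE', **definition)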
The available paginators are:
DataPipeline.Paginator.describe_objects
DataPipeline.Paginator.list_pipelines
DataPipeline.Paginator.query_objects
DataPipeline.Paginator.describe_objects

paginator = client.get_paginator('describe_objects')
paginate(**kwargs)

Creates an iterator that will paginate through responses from DataPipeline.Client.describe_objects().
Request Syntax
response_iterator = paginator.paginate(
pipelineId='string',
objectIds=[
'string',
],
evaluateExpressions=True|False,
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Return type: dict
Returns:
Response Syntax
{
'pipelineObjects': [
{
'id': 'string',
'name': 'string',
'fields': [
{
'key': 'string',
'stringValue': 'string',
'refValue': 'string'
},
]
},
],
'hasMoreResults': True|False,
'NextToken': 'string'
}
Response Structure
DataPipeline.Paginator.list_pipelines

paginator = client.get_paginator('list_pipelines')
paginate(**kwargs)

Creates an iterator that will paginate through responses from DataPipeline.Client.list_pipelines().
Request Syntax
response_iterator = paginator.paginate(
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Parameters: PaginationConfig (dict) -- A dictionary that provides parameters to control pagination.
Return type: dict
Returns:
Response Syntax
{
'pipelineIdList': [
{
'id': 'string',
'name': 'string'
},
],
'hasMoreResults': True|False,
'NextToken': 'string'
}
Response Structure
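For example, merging every page into one result while capping the total number of items with PaginationConfig (the MaxItems value is arbitrary):

paginator = client.get_paginator('list_pipelines')
# build_full_result() merges all pages into a single response-shaped dict.
all_pages = paginator.paginate(PaginationConfig={'MaxItems': 200}).build_full_result()
for summary in all_pages['pipelineIdList']:
    print(summary['id'], summary['name'])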
DataPipeline.Paginator.query_objects

paginator = client.get_paginator('query_objects')
paginate(**kwargs)

Creates an iterator that will paginate through responses from DataPipeline.Client.query_objects().
Request Syntax
response_iterator = paginator.paginate(
pipelineId='string',
query={
'selectors': [
{
'fieldName': 'string',
'operator': {
'type': 'EQ'|'REF_EQ'|'LE'|'GE'|'BETWEEN',
'values': [
'string',
]
}
},
]
},
sphere='string',
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Return type: dict
Returns:
Response Syntax
{
'ids': [
'string',
],
'hasMoreResults': True|False,
'NextToken': 'string'
}
Response Structure