boto3.s3.inject.bucket_load(self, *args, **kwargs)

Calls s3.Client.list_buckets() to update the attributes of the Bucket resource.
Abstractions over S3's upload/download operations.
This module provides high-level abstractions for efficient uploads and downloads. It handles several things on the user's behalf, such as switching to multipart transfers for large files, transferring in parallel, reporting progress through callbacks, and retrying failed transfers. The module ships with a reasonable set of defaults, and it also lets you configure many aspects of the transfer process, including the multipart threshold, the maximum concurrency, and the number of download attempts.
There is no support for s3->s3 multipart copies at this time.
The simplest way to use this module is:
client = boto3.client('s3', 'us-west-2')
transfer = S3Transfer(client)
# Upload /tmp/myfile to s3://bucket/key
transfer.upload_file('/tmp/myfile', 'bucket', 'key')
# Download s3://bucket/key to /tmp/myfile
transfer.download_file('bucket', 'key', '/tmp/myfile')
The upload_file and download_file methods also accept an extra_args dictionary, whose entries are forwarded through to the corresponding client operation. Here are a few examples using upload_file:
# Making the object public
transfer.upload_file('/tmp/myfile', 'bucket', 'key',
extra_args={'ACL': 'public-read'})
# Setting metadata
transfer.upload_file('/tmp/myfile', 'bucket', 'key',
extra_args={'Metadata': {'a': 'b', 'c': 'd'}})
# Setting content type
transfer.upload_file('/tmp/myfile.json', 'bucket', 'key',
extra_args={'ContentType': "application/json"})
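Conceptually, each entry in extra_args is merged into the parameters of the underlying client call (PutObject, in the upload case). A minimal pure-Python sketch of that merge; build_put_object_call is a hypothetical helper for illustration, not part of boto3:

```python
def build_put_object_call(filename, bucket, key, extra_args=None):
    """Hypothetical illustration: merge extra_args entries into the
    base parameters of the underlying PutObject request.  The file
    contents (filename) are sent as the request body and omitted here."""
    params = {'Bucket': bucket, 'Key': key}
    params.update(extra_args or {})
    return params

# The ACL entry rides along with the required parameters.
call = build_put_object_call('/tmp/myfile', 'bucket', 'key',
                             extra_args={'ACL': 'public-read'})
```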
The S3Transfer class also supports progress callbacks, so you can report transfer progress to users. Both the upload_file and download_file methods take an optional callback parameter. Here's an example of how to print a simple progress percentage to the user:
import os
import sys
import threading

class ProgressPercentage(object):
    def __init__(self, filename):
        self._filename = filename
        self._size = float(os.path.getsize(filename))
        self._seen_so_far = 0
        self._lock = threading.Lock()

    def __call__(self, bytes_amount):
        # To simplify, we'll assume this is hooked up
        # to a single filename.
        with self._lock:
            self._seen_so_far += bytes_amount
            percentage = (self._seen_so_far / self._size) * 100
            sys.stdout.write(
                "\r%s  %s / %s  (%.2f%%)" % (
                    self._filename, self._seen_so_far,
                    self._size, percentage))
            sys.stdout.flush()

transfer = S3Transfer(boto3.client('s3', 'us-west-2'))
# Upload /tmp/myfile to s3://bucket/key and print upload progress.
transfer.upload_file('/tmp/myfile', 'bucket', 'key',
                     callback=ProgressPercentage('/tmp/myfile'))
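The callback contract is simple: the transfer machinery invokes the callable repeatedly with the number of bytes moved since the previous invocation. A minimal pure-Python sketch of that contract, with a hypothetical drive() loop standing in for the real transfer internals:

```python
def drive(callback, total_size, chunk=1024):
    """Hypothetical stand-in for the transfer internals: report
    progress as chunk-sized increments of bytes moved."""
    sent = 0
    while sent < total_size:
        n = min(chunk, total_size - sent)
        callback(n)  # each call carries only the delta, not a running total
        sent += n

seen = []
drive(seen.append, total_size=2500, chunk=1024)
# The increments always sum to the total transferred size.
assert sum(seen) == 2500
```

Because only deltas are delivered, ProgressPercentage keeps its own running total in self._seen_so_far.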
You can also provide a TransferConfig object to the S3Transfer object for more fine-grained control over the transfer. For example:
client = boto3.client('s3', 'us-west-2')
config = TransferConfig(
multipart_threshold=8 * 1024 * 1024,
max_concurrency=10,
num_download_attempts=10,
)
transfer = S3Transfer(client, config)
transfer.upload_file('/tmp/foo', 'bucket', 'key')
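multipart_threshold controls the switch described earlier: files at or above the threshold are transferred in parts, smaller ones in a single request. A pure-Python sketch of that decision; choose_strategy is a hypothetical helper, not part of boto3:

```python
def choose_strategy(file_size, multipart_threshold=8 * 1024 * 1024):
    """Hypothetical helper: pick a transfer strategy the way S3Transfer
    switches on the configured multipart threshold."""
    if file_size >= multipart_threshold:
        return 'multipart'
    return 'single-part'

# A small file stays single-part; a large one is split into parts.
small = choose_strategy(1024)
large = choose_strategy(64 * 1024 * 1024)
```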
boto3.s3.transfer.MultipartDownloader(client, config, osutil, executor_cls=<class 'concurrent.futures.thread.ThreadPoolExecutor'>)

boto3.s3.transfer.MultipartUploader(client, config, osutil, executor_cls=<class 'concurrent.futures.thread.ThreadPoolExecutor'>)

    UPLOAD_PART_ARGS = ['SSECustomerKey', 'SSECustomerAlgorithm', 'SSECustomerKeyMD5', 'RequestPayer']

    upload_file(filename, bucket, key, callback, extra_args)

boto3.s3.transfer.ReadFileChunk(fileobj, start_byte, chunk_size, full_file_size, callback=None, enable_callback=True)

Given a file object shown below:

    |___________________________________________________|
    0         |               |                         full_file_size
              |--chunk_size---|
           start_byte

this class exposes only the bytes from start_byte up to start_byte + chunk_size.

    from_filename(filename, start_byte, chunk_size, callback=None, enable_callback=True)

    Convenience factory function to create a ReadFileChunk from a filename.

    Returns: A new instance of ReadFileChunk
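The diagram above describes a bounded window into a larger file: reads start at start_byte and never extend past start_byte + chunk_size. A pure-Python sketch of those semantics using plain file I/O; read_chunk is a hypothetical helper, not the real ReadFileChunk:

```python
import os
import tempfile

def read_chunk(filename, start_byte, chunk_size):
    """Hypothetical helper mirroring ReadFileChunk's bounds: seek to
    start_byte and return at most chunk_size bytes."""
    with open(filename, 'rb') as f:
        f.seek(start_byte)
        return f.read(chunk_size)

# Demonstrate on a small temporary file of known contents.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'0123456789')
    path = tmp.name
try:
    chunk = read_chunk(path, start_byte=3, chunk_size=4)
finally:
    os.remove(path)
# chunk is b'3456'
```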
boto3.s3.transfer.S3Transfer(client, config=None, osutil=None)

    ALLOWED_DOWNLOAD_ARGS = ['VersionId', 'SSECustomerAlgorithm', 'SSECustomerKey', 'SSECustomerKeyMD5', 'RequestPayer']

    ALLOWED_UPLOAD_ARGS = ['ACL', 'CacheControl', 'ContentDisposition', 'ContentEncoding', 'ContentLanguage', 'ContentType', 'Expires', 'GrantFullControl', 'GrantRead', 'GrantReadACP', 'GrantWriteACP', 'Metadata', 'RequestPayer', 'ServerSideEncryption', 'StorageClass', 'SSECustomerAlgorithm', 'SSECustomerKey', 'SSECustomerKeyMD5', 'SSEKMSKeyId']

    download_file(bucket, key, filename, extra_args=None, callback=None)

    Download an S3 object to a file. This method issues a head_object request to determine the size of the S3 object, which is used to decide whether the object is downloaded in parallel.
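The ALLOWED_DOWNLOAD_ARGS and ALLOWED_UPLOAD_ARGS whitelists bound what extra_args may contain for each direction. A minimal sketch of that kind of validation; validate_extra_args is a hypothetical helper, not boto3's actual implementation:

```python
ALLOWED_DOWNLOAD_ARGS = ['VersionId', 'SSECustomerAlgorithm', 'SSECustomerKey',
                         'SSECustomerKeyMD5', 'RequestPayer']

def validate_extra_args(extra_args, allowed):
    """Hypothetical helper: reject extra_args keys outside the whitelist."""
    for key in extra_args or {}:
        if key not in allowed:
            raise ValueError("Invalid extra_args key '%s'" % key)

# Downloading a specific object version is allowed...
validate_extra_args({'VersionId': 'abc123'}, ALLOWED_DOWNLOAD_ARGS)
# ...but an upload-only argument such as ACL is not valid for downloads.
```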
    upload_file(filename, bucket, key, callback=None, extra_args=None)

boto3.s3.transfer.ShutdownQueue(maxsize=0)

A queue implementation that can be shut down.
Shutting down a queue means that this class adds a trigger_shutdown method that causes all subsequent calls to put() to fail with a QueueShutdownError. It purposefully deviates from queue.Queue and is not meant to be a drop-in replacement for queue.Queue.
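The behavior described above can be sketched in pure Python. ShutdownQueueSketch and its internals are hypothetical, though trigger_shutdown and QueueShutdownError are the names the description uses:

```python
import queue
import threading

class QueueShutdownError(Exception):
    pass

class ShutdownQueueSketch(queue.Queue):
    """Hypothetical sketch of a queue whose put() fails after shutdown."""
    def __init__(self, maxsize=0):
        super().__init__(maxsize)
        self._shutdown = False
        self._shutdown_lock = threading.Lock()

    def trigger_shutdown(self):
        with self._shutdown_lock:
            self._shutdown = True

    def put(self, item, block=True, timeout=None):
        # Reject new items once shutdown has been triggered.
        with self._shutdown_lock:
            if self._shutdown:
                raise QueueShutdownError("put() called after shutdown")
        super().put(item, block, timeout)

q = ShutdownQueueSketch()
q.put('job-1')          # accepted before shutdown
q.trigger_shutdown()    # all later put() calls now raise
```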
boto3.s3.transfer.StreamReaderProgress(stream, callback=None)

Wrapper for a read-only stream that adds progress callbacks.
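A pure-Python sketch of the same idea: wrap a stream so each read reports its size to a callback. ProgressStream is a hypothetical stand-in for StreamReaderProgress, not its actual implementation:

```python
import io

class ProgressStream:
    """Hypothetical wrapper: forward read() and report bytes read."""
    def __init__(self, stream, callback=None):
        self._stream = stream
        self._callback = callback

    def read(self, *args, **kwargs):
        data = self._stream.read(*args, **kwargs)
        if self._callback is not None:
            self._callback(len(data))  # report this read's size
        return data

seen = []
wrapped = ProgressStream(io.BytesIO(b'hello world'), seen.append)
first = wrapped.read(5)   # b'hello', reports 5
rest = wrapped.read()     # b' world', reports 6
```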