To publish your Beluga feed and mini-website, you must have an S3-compatible object storage provider account. We currently support Amazon Web Services S3, Wasabi, Digital Ocean Spaces, Linode Objects, Backblaze B2, and DreamHost, with more to come soon. So you are the owner of your data, and you can host it wherever you like.

The `aws s3 cp` command can read stdin or write to stdout by using `-` instead of a filename, so you can pipe two `aws s3 cp` commands together to read a file from Backblaze B2 and write it to Amazon S3 without it hitting the local disk.

First, configure two AWS profiles from the command line, one for B2 and the other for AWS. `aws configure` will prompt you for the credentials for each account:

```
% aws configure --profile b2
% aws configure --profile aws
```

Now you can specify the profiles in the two `aws s3 cp` commands. Note that the first `aws s3 cp` command also needs the `--endpoint-url` argument, since it can't be set in the profile. Replace the placeholders with your own B2 endpoint URL and bucket names:

```
aws --profile b2 --endpoint-url '<your-B2-endpoint-URL>' s3 cp 's3://<b2-bucket>/filename.ext' - \
    | aws --profile aws s3 cp - 's3://<s3-bucket>/filename.ext'
```

It's easy to run a quick test on a single file:

```
# Write a file to Backblaze B2
% echo 'Hello!' | aws --profile b2 --endpoint-url '<your-B2-endpoint-URL>' s3 cp - s3://metadaddy-b2/hello.txt

# Copy file from Backblaze B2 to Amazon S3
% aws --profile b2 --endpoint-url '<your-B2-endpoint-URL>' s3 cp s3://metadaddy-b2/hello.txt - \
    | aws --profile aws s3 cp - s3://metadaddy-s3/hello.txt

# Read the file back from Amazon S3
% aws --profile aws s3 cp s3://metadaddy-s3/hello.txt -
```

One wrinkle is that, if the file is more than 50 GB, you will need to use the `--expected-size` argument to specify the file size so that the `cp` command can split the stream into parts for a large file upload. From the `aws s3 cp` documentation:

> `--expected-size` (string) This argument specifies the expected size of a stream in terms of bytes. Note that this argument is needed only when a stream is being uploaded to s3 and the size is larger than 50GB. Failure to include this argument under these conditions may result in a failed upload due to too many parts in upload.

Here's a one-liner that copies the contents of a bucket on B2 to a bucket on S3, outputting the filename (object key) and size of each file. It assumes you've set up the profiles as above. The pieces marked `[…]` are elided here; a sketch that fills them in appears at the end of this post:

```
aws --profile b2 --endpoint-url '<your-B2-endpoint-URL>' s3api list-objects-v2 --bucket metadaddy-b2 \
    | […] sh -c '[…] \
        | aws --profile aws s3 cp - "s3://metadaddy-s3/$1" --expected-size $2' sh
```

Although this technique does not hit the local disk, the data still has to flow from B2 to wherever this script is running, then to S3. As B mentioned in his answer, run the script on an EC2 instance for best performance.
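For completeness, here is one way the elided pieces of that one-liner could be filled in. This is a minimal sketch, not the original author's exact code: the `jq`/`xargs` glue is an assumption, the endpoint URL is a placeholder, and it assumes object keys contain no whitespace.

```
# Stream every object in the B2 bucket to the S3 bucket, never touching local disk.
export B2_ENDPOINT='<your-B2-endpoint-URL>'   # placeholder: your bucket's S3-compatible endpoint

aws --profile b2 --endpoint-url "$B2_ENDPOINT" s3api list-objects-v2 --bucket metadaddy-b2 \
    | jq -r '.Contents[] | "\(.Key) \(.Size)"' \
    | xargs -n 2 sh -c '
        # $1 is the object key, $2 its size in bytes (the trailing "sh" becomes $0)
        echo "$1 ($2 bytes)"
        aws --profile b2 --endpoint-url "$B2_ENDPOINT" s3 cp "s3://metadaddy-b2/$1" - \
            | aws --profile aws s3 cp - "s3://metadaddy-s3/$1" --expected-size "$2"
    ' sh
```

Passing each object's `Size` straight from the listing into `--expected-size` takes care of the over-50 GB wrinkle automatically. For keys that may contain spaces, a `while read` loop over tab-delimited `jq` output would be safer than `xargs`.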
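Once a copy finishes, a quick spot-check is to compare object sizes on the two sides with `s3api head-object`. This sketch reuses the bucket names and `hello.txt` object from the test above, plus the same endpoint placeholder; the two numbers should match:

```
aws --profile b2 --endpoint-url "$B2_ENDPOINT" s3api head-object \
    --bucket metadaddy-b2 --key hello.txt --query ContentLength
aws --profile aws s3api head-object \
    --bucket metadaddy-s3 --key hello.txt --query ContentLength
```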
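Finally, if either `cp` command fails to authenticate, confirm that both profiles were actually written. On AWS CLI v2 you can list them directly; the credential values in the comments below are placeholders, not real keys:

```
% aws configure list-profiles   # requires AWS CLI v2; should print both b2 and aws

# aws configure stores the keys in ~/.aws/credentials, one section per profile:
# [b2]
# aws_access_key_id = <B2-application-key-ID>
# aws_secret_access_key = <B2-application-key>
# [aws]
# aws_access_key_id = <AWS-access-key-ID>
# aws_secret_access_key = <AWS-secret-access-key>
```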