To perform batch inference with Bedrock, you need to upload files to S3.
This process can be cumbersome and duplicative, because you need to transform your data into model-specific formats. With Portkey, you can upload the file in OpenAI format, and Portkey will transform it into the format required by Bedrock on the fly (a sketch of an OpenAI-format batch file follows the list below). This is the most efficient way to:
- Test your data with different foundation models
- Perform A/B testing with different foundation models
- Perform batch inference with different foundation models
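For reference, "OpenAI format" here means the standard OpenAI batch input file: a `.jsonl` file with one request object per line. Below is a minimal sketch that writes such a file; the prompts, file name, and model placeholder are illustrative, not values required by Portkey.

```python
import json

# Minimal sketch of an OpenAI-format batch input file: one JSON request per line.
# The prompts, file name, and model placeholder below are illustrative only.
requests = [
    {
        "custom_id": f"request-{i}",       # unique ID to match responses back to requests
        "method": "POST",
        "url": "/v1/chat/completions",     # endpoint each request targets
        "body": {
            "model": "YOUR_AWS_BEDROCK_MODEL",
            "messages": [{"role": "user", "content": prompt}],
        },
    }
    for i, prompt in enumerate(["Hello!", "Summarize this sentence."])
]

with open("file.jsonl", "w") as f:
    for request in requests:
        f.write(json.dumps(request) + "\n")
```

Because Portkey transforms each OpenAI-style request body into the Bedrock-native format on the fly, the same `.jsonl` file can be reused across foundation models.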
from portkey_ai import Portkey# Initialize the Portkey clientportkey = Portkey( api_key="PORTKEY_API_KEY", # Replace with your Portkey API key provider="@PROVIDER", provider="bedrock", aws_region="YOUR_AWS_REGION", aws_s3_bucket="YOUR_AWS_S3_BUCKET", aws_s3_object_key="YOUR_AWS_S3_OBJECT_KEY", aws_bedrock_model="YOUR_AWS_BEDROCK_MODEL", amz_server_side_encryption: "ENCRYPTION_TYPE", # [optional] default is aws:kms amz_server_side_encryption_aws_kms_key_id: "KMS_KEY_ID" # [optional] use this only if you want to use a KMS key to encrypt the file at rest)upload_file_response = portkey.files.create( purpose="batch", file=open("file.pdf", "rb"))print(upload_file_response)
from portkey_ai import Portkey# Initialize the Portkey clientportkey = Portkey( api_key="PORTKEY_API_KEY", # Replace with your Portkey API key provider="@PROVIDER", aws_region="YOUR_AWS_REGION",)file = portkey.files.retrieve(file_id="file_id")print(file)
from portkey_ai import Portkey# Initialize the Portkey clientportkey = Portkey( api_key="PORTKEY_API_KEY", # Replace with your Portkey API key provider="@PROVIDER", aws_region="YOUR_AWS_REGION",)file_content = portkey.files.content(file_id="file_id")print(file_content)
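Once uploaded, the file ID returned by `files.create` can be used to start a batch job. The sketch below assumes Portkey's OpenAI-compatible `batches.create` method applies here; the endpoint and completion window values are illustrative placeholders, so check the batch inference docs for the exact parameters your setup needs.

```python
from portkey_ai import Portkey

# Sketch only: assumes Portkey's OpenAI-compatible batches API is available
# for Bedrock. Parameter values are illustrative placeholders.
portkey = Portkey(
    api_key="PORTKEY_API_KEY",  # Replace with your Portkey API key
    provider="@PROVIDER",
    aws_region="YOUR_AWS_REGION",
)

batch = portkey.batches.create(
    input_file_id="file_id",          # ID returned by portkey.files.create
    endpoint="/v1/chat/completions",  # target endpoint for every request in the file
    completion_window="24h",
)
print(batch)
```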
The following endpoints are NOT supported for Bedrock for security reasons: