anyscale
#10008
validation_error
Validation failed: The current system does not support n>1. Refer to the documentation or support for more details.
This error has been identified and solved.
Reason
The 400 status error you are encountering, specifically rayllm.backend.llm.error_handling.ValidationError, is likely due to a validation issue with the request you are sending to the API. Here are some possible reasons:
Validation Errors
The request may contain invalid or unsupported parameters. For example, the error message states that n>1 is not supported yet in aviary, indicating that the API does not currently accept values greater than 1 for the parameter n.
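As a concrete illustration, the request below sets n=1, the supported value. This is a minimal sketch: the base URL, model name, and API key are placeholders assumed for the example, not values taken from the error report.

```python
from openai import OpenAI

# Placeholder endpoint and model; substitute your actual deployment values.
client = OpenAI(
    base_url="https://api.endpoints.anyscale.com/v1",  # assumed Anyscale-style base URL
    api_key="YOUR_API_KEY",
)

# n=1 is the supported value; n>1 triggers the ValidationError described above.
response = client.chat.completions.create(
    model="meta-llama/Llama-2-7b-chat-hf",  # placeholder model name
    messages=[{"role": "user", "content": "Hello!"}],
    n=1,
)
print(response.choices[0].message.content)
```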
Request Format Issues
The request payload might not be formatted correctly according to the API's expectations, leading to a validation error. This could include issues such as extra fields that are not permitted, or other structural errors in the request data.
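For example, a payload that includes a field the schema does not declare can be rejected with an "extra fields not permitted" style message. In the sketch below, foo is a deliberately invented field, and the base URL and model name are again placeholders:

```python
import requests

BASE_URL = "https://api.endpoints.anyscale.com/v1"  # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# "foo" is a made-up field: validators that forbid extra fields will reject it.
bad_payload = {
    "model": "meta-llama/Llama-2-7b-chat-hf",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello!"}],
    "foo": "bar",  # extra field not permitted -> 400 validation error
}

resp = requests.post(f"{BASE_URL}/chat/completions", headers=HEADERS, json=bad_payload)
if resp.status_code == 400:
    print("Validation error:", resp.json())  # inspect which field was rejected
```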
Parameter Limitations
The API may have specific limitations on the values or lengths of certain parameters, and your request may be exceeding these limits. For instance, errors can occur if the input length exceeds the maximum allowed tokens or if other parameters are set beyond supported ranges.
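One defensive pattern is to trim the prompt before sending it. The sketch below assumes a 4096-token context limit and a rough four-characters-per-token estimate; the real limit depends on the deployed model, so treat both numbers as placeholders:

```python
MAX_TOKENS = 4096     # assumed model context limit; check your model's card
CHARS_PER_TOKEN = 4   # rough heuristic; use a real tokenizer for accuracy

def truncate_prompt(text: str, reserved_for_output: int = 512) -> str:
    """Trim the prompt so prompt + expected output stays under the limit."""
    budget = (MAX_TOKENS - reserved_for_output) * CHARS_PER_TOKEN
    return text[:budget]

prompt = truncate_prompt(open("long_document.txt").read())
```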
Solution
To fix the 400 status error due to rayllm.backend.llm.error_handling.ValidationError, you need to ensure that your request complies with the API's validation rules. Here are the key steps to resolve the issue:
Review the API documentation to understand the supported parameters and their valid ranges.
Ensure that the request payload is correctly formatted and does not include any extra or unsupported fields.
Check that the values of parameters, such as n, are within the supported limits.
Key actions (combined in the sketch after this list):
Remove any unsupported parameters.
Adjust parameter values to be within the allowed ranges.
Verify the request format matches the API's expectations.
Ensure that input lengths do not exceed the maximum allowed tokens.
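A pre-flight check combining these actions might look like the following sketch. The allowed-parameter set and the dropped field name are illustrative, not the API's actual schema; consult the documentation for the authoritative list:

```python
ALLOWED_PARAMS = {"model", "messages", "temperature", "top_p", "max_tokens", "n", "stream"}

def sanitize_request(payload: dict) -> dict:
    """Drop unsupported fields and clamp values known to fail validation."""
    clean = {k: v for k, v in payload.items() if k in ALLOWED_PARAMS}
    # The backend rejects n > 1, so force the supported value.
    if clean.get("n", 1) > 1:
        clean["n"] = 1
    return clean

request = sanitize_request({
    "model": "meta-llama/Llama-2-7b-chat-hf",  # placeholder model name
    "messages": [{"role": "user", "content": "Hi"}],
    "n": 4,                # would trigger the ValidationError
    "extra_field": {},     # hypothetical unsupported field, silently dropped
})
assert request["n"] == 1 and "extra_field" not in request
```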
Suggested Links
https://github.com/ray-project/ray/issues/31370
https://community.llamaindex.ai/hi-all-WJ1Y1QcRV8v7
https://community.openai.com/t/400-model-error-call-to-llm-failed/276375
https://docs.apigee.com/api-platform/troubleshoot/runtime/400-decompressionfailureatrequest
https://discuss.ray.io/t/error-scaling-ray-serve-to-2-replicas/3181
https://discuss.ray.io/t/llm-ray-serve-problem/11862