VoyageAI refactoring and multimodal capability #412
Conversation
Hi, I’m Jit, a friendly security platform designed to help developers build secure applications from day zero with an MVS (Minimal Viable Security) mindset. In case there are security findings, they will be communicated to you as a comment inside the PR. Hope you’ll enjoy using Jit. Questions? Comments? Want to learn more? Get in touch with us.
Pull Request Overview
This PR refactors the VoyageAI integration to support contextual embedding models, improve token management, and enhance testing coverage. The changes include removing the default model value (requiring explicit model specification), implementing token-aware batching, adding support for the voyage-context-3 model, and updating tests to cover the new rerank-2.5 model.
Key Changes:
- Added token counting functionality and token-aware batching based on model-specific token limits
- Introduced support for contextualized embedding models (voyage-context-3) with automatic API endpoint detection
- Updated VoyageAI package dependency from 0.2.2 to 0.3.5
Reviewed Changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| tests/integration/test_vectorizers.py | Added VoyageAI-specific tests for token counting, context models, and batching; updated model references to voyage-3.5 |
| tests/integration/test_rerankers.py | Enhanced reranker test fixture to test both rerank-lite-1 and rerank-2.5 models |
| redisvl/utils/vectorize/text/voyageai.py | Implemented token-aware batching, context model support, removed default model value, added token limits dictionary |
| pyproject.toml | Updated voyageai dependency version requirement from 0.2.2 to 0.3.5 |
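The token-aware batching described in the overview can be sketched as a greedy grouping pass. This is a minimal illustration, not the PR's actual implementation: the `count_tokens` callable and the limit value are stand-ins for the model-specific machinery in `voyageai.py`.

```python
from typing import Callable, List

def batch_by_tokens(
    texts: List[str],
    count_tokens: Callable[[str], int],
    max_tokens_per_batch: int,
) -> List[List[str]]:
    """Greedily group texts so each batch stays within the model's token limit."""
    batches: List[List[str]] = []
    current: List[str] = []
    current_tokens = 0
    for text in texts:
        n = count_tokens(text)
        # Start a new batch if adding this text would exceed the limit.
        if current and current_tokens + n > max_tokens_per_batch:
            batches.append(current)
            current, current_tokens = [], 0
        current.append(text)
        current_tokens += n
    if current:
        batches.append(current)
    return batches

# Example with a whitespace tokenizer stand-in and a tiny limit:
batches = batch_by_tokens(["a b", "c d e", "f"], lambda t: len(t.split()), 4)
# → [["a b"], ["c d e", "f"]]
```

In the real vectorizer the limit would come from the per-model token-limit table and the counter from VoyageAI's tokenizer, but the grouping logic is the same shape.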
```diff
 def __init__(
     self,
-    model: str = "voyage-large-2",
+    model: str,
```
Removing the default model value is a breaking API change. Existing code that instantiates VoyageAITextVectorizer without specifying a model will fail. Consider adding a deprecation warning in a previous release or documenting this as a breaking change in the release notes.
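A backward-compatible pattern along the lines Copilot suggests keeps the old default for one release but warns when it is used implicitly. This is a sketch only, not the PR's code; the class name, the `""` sentinel, and the `_DEFAULT_MODEL` constant are illustrative.

```python
import warnings

_DEFAULT_MODEL = "voyage-large-2"  # the previous implicit default

class VoyageAITextVectorizerSketch:
    def __init__(self, model: str = ""):
        # Empty string acts as a "not provided" sentinel for this sketch.
        if not model:
            warnings.warn(
                "Calling VoyageAITextVectorizer without an explicit `model` is "
                f"deprecated; defaulting to '{_DEFAULT_MODEL}'. The implicit "
                "default will be removed in a future major release.",
                DeprecationWarning,
                stacklevel=2,
            )
            model = _DEFAULT_MODEL
        self.model = model
```

Existing callers keep working (with a visible warning), and the hard break can land in the next major version.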
@fzowl There are some failures in CI: The job failed due to mypy type errors in redisvl/utils/vectorize/text/voyageai.py:
Solution:
This will resolve the mypy errors and allow the type checks to pass. Relevant file: redisvl/utils/vectorize/text/voyageai.py (ref: 90340c52bfc47b4944b2095e4e1b492b80bf2507)
@bsbodden Yes, the contextualized_embed functions were introduced in voyageai package version 0.3.5; that's why I changed the dependency in pyproject.toml.
@bsbodden Can you please recheck this change? Is there anything else I should do?
@bsbodden Can you please take a look?
Force-pushed from f8578be to 04461cd
Hey @fzowl, thanks for updating this PR to build on the latest vectorizer framework (and sorry for the delay reviewing) - this looks good to me. Since you made the changes, VoyageAI introduced their 4th generation models (voyage-4, etc.) - could you add those to your token limit mapping?
Also, could you rebase your PR off the current main branch for RedisVL? We've introduced some improvements to our CI/CD to allow our testing workflows to be runnable on PRs from forked repositories!
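Extending the token-limit mapping for the newer model generations could look like the sketch below. The model names follow VoyageAI's naming, but the numeric limits here are placeholders, not confirmed values; the real numbers should come from VoyageAI's API documentation.

```python
# Sketch only: numeric limits are PLACEHOLDERS, not VoyageAI's documented values.
VOYAGE_TOTAL_TOKEN_LIMITS = {
    "voyage-3.5": 320_000,          # placeholder
    "voyage-3.5-lite": 1_000_000,   # placeholder
    "voyage-context-3": 120_000,    # placeholder
    # 4th-generation models requested in this review:
    "voyage-4": 320_000,            # placeholder
}

DEFAULT_TOKEN_LIMIT = 120_000  # placeholder fallback for unlisted models

def total_token_limit(model: str) -> int:
    """Look up a model's total-token limit, falling back to a safe default."""
    return VOYAGE_TOTAL_TOKEN_LIMITS.get(model, DEFAULT_TOKEN_LIMIT)
```

A fallback default keeps the vectorizer usable when VoyageAI ships a model the table does not yet know about.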
- Keep default model with DeprecationWarning when not explicitly set
- Add runtime DeprecationWarning when deprecated batch_size is passed
- Clarify count_tokens() is local/CPU-only in docstrings
Force-pushed from e88dcdb to 1e971a6
🛡️ Jit Security Scan Results: ✅ No security findings were detected in this PR
Security scan by Jit
Cursor Bugbot has reviewed your changes and found 2 potential issues.
Reviewed by Cursor Bugbot for commit 1e971a6.
```python
                DeprecationWarning,
                stacklevel=2,
            )
        else:
```
Auto-batching never activates due to base class default
High Severity
The batch_size parameter in _embed_many and _aembed_many will never be None when called through the public embed_many/aembed_many API. The base class BaseVectorizer.embed_many declares batch_size: int = 10 and always forwards it, so the if batch_size is not None check always evaluates to True. This means the deprecation warning fires on every call (even when the user didn't pass batch_size), and the model-specific auto-determined batch size from _get_batch_size() is never used.
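A common remedy for this class of bug is a sentinel default: the public method defaults `batch_size` to `None` instead of `10`, so the private method can distinguish "not passed" from an explicit value. The sketch below illustrates the pattern with simplified class and method names; it is not the PR's actual code.

```python
import warnings
from typing import List, Optional

class BaseVectorizerSketch:
    # Default to None instead of 10 so subclasses can detect "not provided".
    def embed_many(self, texts: List[str], batch_size: Optional[int] = None):
        return self._embed_many(texts, batch_size=batch_size)

class VoyageSketch(BaseVectorizerSketch):
    def _get_batch_size(self) -> int:
        return 128  # stand-in for the model-specific auto-determined size

    def _embed_many(self, texts: List[str], batch_size: Optional[int] = None):
        if batch_size is not None:
            # Only warn when the caller actually passed a value.
            warnings.warn(
                "`batch_size` is deprecated; batches are now sized "
                "automatically from the model's token limit.",
                DeprecationWarning,
                stacklevel=2,
            )
        else:
            batch_size = self._get_batch_size()  # now actually reachable
        return batch_size
```

With this shape, the deprecation warning fires only for explicit callers, and auto-batching activates for everyone else.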
```python
        Returns:
            bool: True if the model is a context model, False otherwise.
        """
        return "context" in self.model
```
Unused method _is_context_model and constant VOYAGE_TOTAL_TOKEN_LIMITS
Medium Severity
_is_context_model() is defined but never called anywhere in production code — the _setup method doesn't route context models to contextualized_embed. Similarly, VOYAGE_TOTAL_TOKEN_LIMITS is defined but never referenced in production code (only in tests). The docstring advertises that "Context models automatically use contextualized_embed API," but this behavior is not implemented, making the documentation misleading.
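Wiring `_is_context_model()` into the embedding path might look like the sketch below. The `contextualized_embed` method exists in voyageai >= 0.3.5 per the discussion above, but the dispatch shown here is illustrative, not the PR's code, and the client is injected so the routing can be checked without a real API call.

```python
from typing import List

class VoyageDispatchSketch:
    """Illustrative routing between standard and contextualized embedding."""

    def __init__(self, model: str, client):
        self.model = model
        self.client = client  # e.g. voyageai.Client(...) in real code

    def _is_context_model(self) -> bool:
        return "context" in self.model

    def embed_many(self, texts: List[str]):
        if self._is_context_model():
            # voyageai >= 0.3.5: contextualized_embed takes a list of
            # document chunk lists (hence [texts]). Response handling is
            # omitted here and may differ from the real client.
            return self.client.contextualized_embed(
                inputs=[texts], model=self.model
            )
        return self.client.embed(texts, model=self.model)
```

Routing at this single choke point also gives the unused `VOYAGE_TOTAL_TOKEN_LIMITS` constant an obvious home: the batcher for either branch can consult it.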


VoyageAI refactoring:

Note

Medium Risk

Updates VoyageAI embedding behavior and public surface (new deprecations for implicit `model` and `batch_size`, plus new token-counting APIs), which could affect existing integrations and batching behavior. Most risk is functional/regression around embedding calls and new integration tests that depend on external tooling (API keys, Pillow, ffmpeg).

Overview

Improves `VoyageAIVectorizer` by adding local token counting (`count_tokens`/`acount_tokens`) and introducing a `VOYAGE_TOTAL_TOKEN_LIMITS` table to support token-aware batching. Deprecates instantiating the vectorizer without an explicit `model` (now emits a `DeprecationWarning` and will remove the implicit default in a future major version) and deprecates passing `batch_size` to `_embed_many`/`_aembed_many` in favor of automatic batch sizing. Expands integration coverage with VoyageAI-specific tests for token counting, token limit constants, and multimodal embedding flows (text, image, video, and async), including conditional skipping when `ffmpeg` isn't available.

Reviewed by Cursor Bugbot for commit 1e971a6. Bugbot is set up for automated code reviews on this repo.