
Commit c85e4e3
Always cache resolvers
After getting access to a second dataset which is a lot more cache-friendly than the original (while also being a real-world sample), I think it makes sense to cache the "fast" resolvers as well: going back to the original commit (b45380d), an S3Fifo(5000) was estimated to take about 3.5MB when full (0.5MB of cache metadata and 3MB of actual cached data), so an S3Fifo(2000) would be about 1.5MB or thereabouts. That is not nothing, but it is pretty much nothing compared to the RSS impact of the advanced parsers.

Fixes #302
Parent: fa30fad
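The 1.5MB figure in the commit message follows from scaling the earlier estimate linearly with cache capacity. A quick back-of-the-envelope check (numbers taken from the message itself; linear scaling of per-entry cost is an assumption):

```python
# Estimate from b45380d: an S3Fifo(5000) takes ~3.5MB when full
# (0.5MB cache metadata + 3.0MB cached data).
FULL_CAPACITY = 5000
FULL_SIZE_MB = 3.5

# Assume memory scales roughly linearly with the number of entries.
per_entry_kb = FULL_SIZE_MB * 1024 / FULL_CAPACITY
estimate_mb = FULL_SIZE_MB * 2000 / FULL_CAPACITY

print(f"~{per_entry_kb:.2f} KB/entry, so S3Fifo(2000) ≈ {estimate_mb:.1f} MB")
```

That lands at about 1.4MB, consistent with the "1.5MB or thereabouts" in the message.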

File tree: 1 file changed (+2 −9 lines)

src/ua_parser/__init__.py

```diff
@@ -68,14 +68,7 @@
 if importlib.util.find_spec("ua_parser_rs"):
     from .regex import Resolver as RegexResolver
 BestAvailableResolver: _ResolverCtor = next(
-    filter(
-        None,
-        (
-            RegexResolver,
-            Re2Resolver,
-            lambda m: CachingResolver(BasicResolver(m), Cache(2000)),
-        ),
-    )
+    filter(None, (RegexResolver, Re2Resolver, BasicResolver))
 )
 
 
@@ -97,7 +90,7 @@ def from_matchers(cls, m: Matchers, /) -> Parser:
         stack.
 
         """
-        return cls(BestAvailableResolver(m))
+        return cls(CachingResolver(BestAvailableResolver(m), Cache(2000)))
 
     def __init__(self, resolver: Resolver) -> None:
         self.resolver = resolver
```
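The net effect of the diff is that the cache wrapper moves out of the resolver-selection tuple and into `from_matchers`, so every resolver gets cached, not just `BasicResolver`. A minimal, illustrative sketch of that wrapper pattern (the class and cache below are stand-ins written for this note — ua-parser's real `Cache` is an S3-FIFO, not the naive FIFO here):

```python
from typing import Callable, Dict, Optional

class FifoCache:
    """Naive bounded FIFO cache, standing in for ua-parser's S3Fifo."""

    def __init__(self, maxsize: int) -> None:
        self.maxsize = maxsize
        self._data: Dict[str, object] = {}

    def get(self, key: str) -> Optional[object]:
        return self._data.get(key)

    def put(self, key: str, value: object) -> None:
        if len(self._data) >= self.maxsize:
            # dicts preserve insertion order, so this evicts the oldest entry
            self._data.pop(next(iter(self._data)))
        self._data[key] = value

class CachingResolver:
    """Memoizes an inner resolver: repeated UA strings skip re-parsing."""

    def __init__(self, resolver: Callable[[str], object], cache: FifoCache) -> None:
        self.resolver = resolver
        self.cache = cache

    def __call__(self, ua: str) -> object:
        result = self.cache.get(ua)
        if result is None:  # sketch assumes resolvers never return None
            result = self.resolver(ua)
            self.cache.put(ua, result)
        return result
```

Because the wrapper only needs a callable, it composes with whichever resolver `next(filter(...))` selects, which is what lets the tuple in the diff shrink to bare resolver constructors.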
