How to make requests_cache work with many concurrent requests?

Question

I'm fetching lots of URLs, and caching the responses for performance, with something like:

import requests
import requests_cache
from multiprocessing.pool import ThreadPool

urls = ['http://www.google.com', ...]
with requests_cache.enabled():
    # 100 worker threads, all reading and writing the shared cache at once
    responses = ThreadPool(100).map(requests.get, urls)

However, I'm getting a lot of errors like:

sqlite3.OperationalError: database is locked

Clearly too many threads are accessing the SQLite database that backs the cache at the same time.
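If persistence across runs didn't matter, I could presumably sidestep the file lock altogether by switching to the in-memory backend. An untested sketch, assuming enabled() accepts the same arguments as install_cache():

import requests
import requests_cache
from multiprocessing.pool import ThreadPool

urls = ['http://www.google.com', ...]

# The 'memory' backend stores responses in a plain dict, so there is
# no SQLite file to lock; the trade-off is that the cache dies with
# the process.
with requests_cache.enabled(backend='memory'):
    responses = ThreadPool(100).map(requests.get, urls)

Assuming I do want the cache on disk, though, the question stands.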

So does requests_cache support some kind of transaction, so that the writes only happen once all the threads have finished? E.g.:

with requests_cache.enabled():
    with requests_cache.transaction():
        responses = ThreadPool(100).map(requests.get, urls)
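
I haven't found such an API. The closest workaround I can think of is funneling every cached request through a lock; a rough sketch (the get_serialized helper is my own, and since the network fetch also happens under the lock, it throws away most of the parallelism):

import threading

import requests
import requests_cache
from multiprocessing.pool import ThreadPool

urls = ['http://www.google.com', ...]
cache_lock = threading.Lock()

def get_serialized(url):
    # Only one thread touches the SQLite-backed cache at a time, which
    # avoids 'database is locked' but effectively serializes the pool.
    with cache_lock:
        return requests.get(url)

with requests_cache.enabled():
    responses = ThreadPool(100).map(get_serialized, urls)

requests_cache also ships redis and mongodb backends, which presumably cope better with concurrent writers, but I haven't tried them.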

Tags: caching, python, concurrency, python-requests · asked 2017-01-03 02:01 · 0 answers
