
How to use a proxy with Python Requests?

23 September 2025 (updated) | 22 min read

If you've ever messed around with scraping or automating requests in Python, you've probably run into the usual roadblocks. One minute everything's smooth, the next you're getting captchas, random 403 errors, or just radio silence from the site. That's usually the internet's polite way of saying: "Hey buddy, slow down." This is where proxies save the day. By setting up a Python Requests proxy, you can mask your real IP, spread your traffic across different addresses, and even slip past geo-restrictions that would normally block you.

Here's what we'll walk through in this guide:

  • How to plug a basic proxy into requests
  • Adding authentication and keeping creds safe with environment variables
  • Working with sessions, cookies, and making sure responses stick
  • Rotating through a list of proxies so you don't get cut off mid-scrape
  • And finally, when it's worth ditching the DIY setup and just leaning on a managed API like ScrapingBee to do the heavy lifting

By the end, you'll know how to hook up proxies to Python's requests (yep, including SOCKS5) and also when it's smarter to just outsource the headache to a dedicated scraping API.


Quick answer (TL;DR)

If you just need the shortcut, there are two easy ways to try out a proxy in Python. One uses the built-in requests library, the other leans on ScrapingBee to take care of proxy headaches at scale.

Using Python Requests directly

With plain requests, you set up a proxy dictionary and pass it along with your request. It works fine for quick experiments or if you're running your own proxy server.

import requests

# Define proxy settings (replace with your own proxy server details)
proxies = {
    "http": "http://192.168.1.100:8080",   # proxy for HTTP
    "https": "http://192.168.1.100:8080",  # proxy for HTTPS
}

# Send request through the proxy to check your public IP
# The timeout prevents the script from hanging if the proxy is slow/dead
resp = requests.get("https://api.ipify.org?format=json", proxies=proxies, timeout=10)

# Print the IP returned by the service — should match the proxy's IP, not your own
print("Proxy IP response:", resp.json())

The catch: you'll have to manage your own proxy list, deal with bans, and write retry logic yourself. It can get messy fast.

Using ScrapingBee's managed proxy

If you'd rather skip the babysitting, ScrapingBee handles the grunt work for you. It rotates premium proxies, retries failures, respects geo-location, and can even render JavaScript pages when needed. One API call = clean, reliable, unblocked access. No proxy juggling.

from scrapingbee import ScrapingBeeClient

# Initialize the client (store API key in env vars for security in real projects)
client = ScrapingBeeClient(api_key="REPLACE-WITH-YOUR-API-KEY")

# Make a request through ScrapingBee
# ScrapingBee automatically handles proxy rotation, retries, and geolocation
resp = client.get(
    "https://api.ipify.org?format=json",
    params={
        "country_code": "us",   # fetch page as if browsing from the US
        "render_js": False      # disable JS rendering (cheaper, faster)
    },
)

# Print the response (IP will come from ScrapingBee's proxy pool)
print("ScrapingBee IP response:", resp.text)

For more details, check out the ScrapingBee Documentation – Web Scraping API.

Prerequisites

Before we dive into showing how to use a proxy in Python with the requests library, make sure you've got the basics ready:

  • Python installed (3.8+ recommended). You can check with: python --version.
  • pip for installing packages (comes bundled with most Python installs). Check with: pip --version.
    • Alternatively, you can use Poetry if you prefer full project/dependency management.
  • A terminal or command prompt to run scripts.
  • A code editor (VS Code, PyCharm, or even Notepad++ will do).

That's it! If you can write and run a simple Python script, you're ready to follow along.

Install Python Requests

Before we can mess with proxies, we need the requests library. It's not included with Python by default, so let's get it installed.

The quick way: pip install

If you just want to get going, install Requests globally (or inside your current virtual environment) with:

pip install requests

To verify the install:

python -m pip show requests

That should print out the version and install path.

💡 Tip: It's usually a good idea to work inside a virtual environment (python -m venv venv) so each project has clean dependencies, but for a quick script, global install works fine too.

Finally, set up a folder for your code:

mkdir proxy-demo
cd proxy-demo

Now we're ready to write some Python proxy requests.

Using Poetry (project isolation)

If you prefer something more structured, Poetry is a popular tool for managing dependencies and virtual environments automatically. I'd recommend this approach for any serious project (though there are alternatives if you want to compare).

Create a new project:

poetry new proxy-demo
cd proxy-demo
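
Then add Requests to the project so it lands in your dependency list:

poetry add requests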

To make sure it worked, open pyproject.toml in the project root. It should contain something like:

dependencies = [
   "requests (>=2.32.5,<3.0.0)"
]

Now you've got a neat, isolated project with requests locked into your dependency list!

Test a basic request (without proxy)

Before we dive into proxies, let's confirm everything works. If you installed Requests without Poetry, just create a demo.py in your project root. If you're using Poetry, place the file at src/proxy_demo/demo.py.

Paste this code inside:

import requests  # import the requests library

# send a GET request to a simple API that returns your current public IP in JSON format
resp = requests.get("https://api.ipify.org?format=json")

# print the JSON response (should look like: {"ip": "203.0.113.42"})
print(resp.json())

Run it:

python demo.py

Or, if you're using Poetry:

poetry run python src/proxy_demo/demo.py

You should see your current public IP in JSON, something like:

{'ip': '203.0.113.42'}

If that shows up, the Requests library is installed and working. Next up: we'll plug in a Python Requests proxy dict to change that IP.

Python Requests proxy example (basic setup)

Using a proxy with Python Requests is straightforward. All you need is a proxy server (for example, 192.168.1.100:8080) and then you pass it into Requests as a dictionary. Proxies are handy if you want to hide your IP, bypass rate limits, or scrape sites that block direct traffic. If you're new to the concept, check out what are examples of proxies for some background.

Below we'll build a small Python Requests proxy example step by step: define a proxy dictionary, send a request through it, and verify the response to confirm the proxy is working.

Create a proxy dictionary

In Python Requests, proxies are passed in as a dictionary. Both http and https should point to your proxy server. For SOCKS proxies, use the socks5h:// prefix (requires requests[socks] to be installed).

proxies = {
   "http": "http://192.168.1.100:8080",
   "https": "http://192.168.1.100:8080"
}

Send a request using the proxy

Now just pass the proxy dictionary when calling requests.get(). Adding a timeout is smart so your script doesn't hang if the proxy is slow or dead.

import requests

# your proxies here...

resp = requests.get("https://api.ipify.org?format=json", 
                    proxies=proxies, 
                    timeout=10)

Verify the proxy IP in the response

To confirm your request really went through the proxy, call a service that echoes your IP. Both https://api.ipify.org?format=json and https://httpbin.org/ip work well.

If the proxy is applied correctly, the IP you see will be different from your real one — that's your working Python proxy request.

print(resp.json())
# {"ip": "203.0.113.77"}  # should show the proxy's IP

Add authentication and environment variables

Some proxies require a username and password. In Python Requests, you can pass these credentials directly in the proxy URL, but hardcoding them in your script isn't the best idea. For anything beyond a quick test, it's safer to store them in environment variables or manage configs with a tool like Poetry. Let's go through both approaches.

Use proxy URLs with username and password

The format for an authenticated proxy is simple:

http://username:password@host:port

Here's a Python requests proxy authentication example:

proxies = {
    "http": "http://user123:pass123@192.168.1.100:8080",
    "https": "http://user123:pass123@192.168.1.100:8080"
}

Be careful with special characters (@, :, %, etc.) in your password. They may need URL encoding. For instance, p@ss:word would need to be encoded before it works in the URL:

import urllib.parse

password = "p@ss:word"
encoded_password = urllib.parse.quote(password)

proxies = {
    "http": f"http://user123:{encoded_password}@192.168.1.100:8080",
    "https": f"http://user123:{encoded_password}@192.168.1.100:8080"
}

Set HTTP_PROXY and HTTPS_PROXY variables

Hardcoding works for demos, but a cleaner way is to use environment variables. Python Requests automatically checks HTTP_PROXY and HTTPS_PROXY, so once these are set, you don't need to touch your code at all. This is especially useful in CI/CD pipelines, Docker containers, or when running the same script across multiple machines.

On Linux/macOS:

export HTTP_PROXY="http://user:pass@192.168.1.100:8080"
export HTTPS_PROXY="http://user:pass@192.168.1.100:8080"

Windows:

set HTTP_PROXY=http://user:pass@192.168.1.100:8080
set HTTPS_PROXY=http://user:pass@192.168.1.100:8080

Note: Requests also honors lowercase env vars (http_proxy, https_proxy, no_proxy).

Once these are in place, any Python proxy requests you make with requests will automatically use them. No proxies dict required in your code.

Use NO_PROXY/no_proxy to bypass the proxy for specific hosts (e.g., localhost,.internal).
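
If you do want an explicit proxies dict in your code (say, to override the proxy for a single request) while still keeping credentials out of source control, you can read the same variables yourself. A minimal sketch, assuming HTTP_PROXY and HTTPS_PROXY are already set:

import os
import requests

# Build the proxies dict from environment variables so credentials stay out of the code
proxies = {
    scheme: os.environ[var]
    for scheme, var in (("http", "HTTP_PROXY"), ("https", "HTTPS_PROXY"))
    if var in os.environ
}

resp = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(resp.json())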

Handle proxy authentication errors

When working with proxies, you'll probably run into a few common HTTP errors:

  • 407 Proxy Authentication Required — your script didn't send valid credentials.
  • 401 Unauthorized — the username/password is wrong.
  • 403 Forbidden — either your IP isn't allowed, or the proxy provider blocks the target site.

If you hit these, double-check:

  • The scheme (http://, https://, or socks5h://) matches your proxy type.
  • Credentials are correctly encoded if they contain special characters.
  • The proxy provider allows access to the site you're targeting.

With these basics in place, you can connect through almost any authenticated proxy without exposing your credentials directly in code.
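
As a quick illustration, here's one way to surface these errors instead of letting the script fail silently. The proxy address and credentials below are placeholders; swap in your own:

import requests

# Placeholder proxy with credentials (replace with your own)
proxies = {
    "http": "http://user123:pass123@192.168.1.100:8080",
    "https": "http://user123:pass123@192.168.1.100:8080",
}

try:
    resp = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
    if resp.status_code in (401, 403, 407):
        print("Authentication problem:", resp.status_code, resp.reason)
    else:
        resp.raise_for_status()
        print(resp.json())
except requests.exceptions.ProxyError as err:
    # Raised when the proxy itself rejects the connection (for example, bad credentials on an HTTPS tunnel)
    print("Proxy error:", err)
except requests.exceptions.ConnectTimeout:
    print("The proxy did not respond in time (it may be dead or overloaded)")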

Use sessions and handle responses

So far we've only sent one-off requests. That's fine for quick tests, but in real projects you usually want more control. Maybe you need to reuse settings, keep cookies between requests (like staying logged in), or make sure every request goes through the same proxy without repeating yourself.

That's where requests.Session comes in. A Session object works like a wrapper around multiple requests—it remembers your proxies, headers, cookies, and even keeps the TCP connection alive for better performance. In other words, it makes your Python proxy requests cleaner and faster.

Session is built right into the requests library, so you don't need to install anything extra.

Create a session object with proxies

A Session object is like a "browser tab" for Python Requests. Instead of starting fresh every time you call requests.get(), a session remembers things like:

  • Proxies — so you don't have to keep passing the same proxies dict over and over.
  • Headers — useful for setting a default User-Agent or API key.
  • Cookies — lets you stay "logged in" across multiple requests.
  • Connections — reuses the same TCP connection under the hood, which is faster than opening a new one for every request.

This is especially handy if you're making a bunch of Python proxy requests in a loop (e.g., scraping multiple pages).

Here's a simple example:

import requests

# Create a session (like opening a browser tab)
session = requests.Session()

# Set proxies once — applies to every request made with this session
session.proxies = {
    "http": "http://192.168.1.100:8080",
    "https": "http://192.168.1.100:8080"
}

# Make a request through the session
resp = session.get("https://httpbin.org/ip", timeout=10)

# Print the IP returned by httpbin (should be the proxy's IP)
print(resp.json())

Instead of repeating proxies=... on every call, you configure it once on the session. Now all requests you make with session.get(), session.post(), etc. will automatically go through that proxy.

Maintain cookies and login state

Another big reason to use sessions: they handle cookies for you automatically. Cookies are those little pieces of data a site uses to remember who you are: for example, when you log in, the server sends back a session cookie. Without it, you'd have to re-enter your username and password on every single request.

With a Session, you log in once, and the cookie sticks around for all future requests made with that session. That makes scraping authenticated pages or navigating through a site much easier.

import requests

s = requests.Session()

# First request: log in
login = s.post("https://example.com/login",
               data={"user": "bob", "pass": "secret"})

# The server usually sets a session cookie here
print("Stored cookies:", s.cookies)

# Next requests will automatically send those cookies
dashboard = s.get("https://example.com/dashboard")
print("Dashboard response:", dashboard.status_code)

Here, s.cookies shows what was stored after logging in. Any further calls with s.get() or s.post() will include those cookies automatically, so you stay "logged in" without extra work.

Read text, JSON, and binary responses

Once you've made a request, the Response object gives you different ways to work with the result. The nice part is you don't have to think about parsing streams manually — Requests does the heavy lifting for you.

r = session.get("https://api.ipify.org?format=json")

# Good practice: raise an error if the status code is 4xx or 5xx
r.raise_for_status()

# Raw response body as a string (useful for HTML or plain text APIs)
print("Text response:", r.text)

# If the server returned JSON, parse it into a Python dict automatically
print("JSON response:", r.json())

# For binary content (images, PDFs, etc.) use .content
img = session.get("https://httpbin.org/image/png")
with open("out.png", "wb") as f:
    f.write(img.content)
print("Saved image to out.png")

Here's the rule of thumb:

  • .text — when you expect HTML or plain text.
  • .json() — when the endpoint returns JSON (saves you from calling json.loads() yourself).
  • .content — when you're dealing with binary data like images, PDFs, or ZIP files.

This way you can handle plain text pages, JSON APIs, and file downloads all with the same clean interface.

Use SOCKS5 proxies with Python requests

Sometimes you'll run into proxies that aren't the usual HTTP/HTTPS kind — these are SOCKS proxies. No, not the ones in your granny's drawer. SOCKS is a network protocol that works at a lower level than HTTP, which means it can forward pretty much any type of traffic (not just web requests).

SOCKS5 is the latest version, and it supports things like authentication and UDP traffic. In scraping, it's often used when you want more flexibility or when your provider only gives you SOCKS endpoints.

Install the extra dependency

Requests doesn't support SOCKS out of the box. You need to install it with the socks extra, which pulls in PySocks:

pip install "requests[socks]"

Or with Poetry:

poetry add "requests[socks]"

Example: Python Requests SOCKS5 proxy

Once installed, you can use a socks5h:// URL in your proxy dictionary (socks5h ensures DNS lookups are done through the proxy instead of locally).

import requests

proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050"
}

resp = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(resp.json())

If the proxy is working, the IP address shown will be different from your real one.

💡 Tip: SOCKS5 proxies are common in privacy tools like Tor. Just make sure your SOCKS server is running and accessible at the given host/port.

Rotate proxies and use premium services

Using a single proxy is fine for quick experiments, but it rarely holds up in real projects. Proxies can die without warning, get rate-limited, or be banned if you send too many requests from the same IP.

The simple workaround is proxy rotation: switching between multiple proxy addresses so you spread your traffic and look less suspicious. This is a common pattern in scraping scripts and is easy enough to do with Python Requests.

For bigger projects, though, managing your own proxy pool gets messy fast. You'll need to monitor which proxies are alive, replace dead ones, handle retries, and sometimes even pick proxies by country. That's where a managed service like ScrapingBee comes in: it rotates proxies, bypasses bans, and takes care of geolocation for you automatically.

👉 If you're comparing providers, here's a helpful reference: guide to choosing a proxy API.

Create a list of proxy IPs

To avoid bans and downtime, it's better to keep a small pool of proxies and switch between them. In production, you wouldn't hardcode these values — you'd usually load them from environment variables, a config file, or a secrets manager.

proxies_list = [
    {"http": "http://192.168.1.101:8080", "https": "http://192.168.1.101:8080"},
    {"http": "http://192.168.1.102:8080", "https": "http://192.168.1.102:8080"},
    {"http": "http://192.168.1.103:8080", "https": "http://192.168.1.103:8080"}
]

Each item in the list is just a Python Requests proxy dict, the same format we used earlier.

Randomly select proxies for each request

The simplest way to rotate proxies is to pick one at random for each request. That way, if one IP dies or gets blocked, your script can just try another. It's also good practice to add retries, since not every proxy in the pool will always be alive.

import requests, random

for _ in range(5):
    proxy = random.choice(proxies_list)
    try:
        r = requests.get("https://httpbin.org/ip", proxies=proxy, timeout=10)
        print("Using proxy:", proxy, "—", r.json())
        break  # success, no need to retry
    except requests.exceptions.RequestException:
        print("Proxy failed, retrying...")

This is a bare-bones example, but it gets the idea across:

  • Randomly select a proxy for each request.
  • Add a retry loop in case the chosen proxy is dead.

For production scraping, you'd probably want smarter logic (like removing failed proxies from the pool or using a library to manage rotation), but this covers the basics.
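
As a taste of that, here's a minimal sketch of the first idea: a proxy is dropped from the pool as soon as it fails (the addresses are placeholders, and fetch_with_rotation is just a helper name for this example):

import random
import requests

# Placeholder pool; in practice you'd load these from config or env vars
proxies_list = [
    {"http": "http://192.168.1.101:8080", "https": "http://192.168.1.101:8080"},
    {"http": "http://192.168.1.102:8080", "https": "http://192.168.1.102:8080"},
    {"http": "http://192.168.1.103:8080", "https": "http://192.168.1.103:8080"},
]

def fetch_with_rotation(url, pool, attempts=5):
    """Try up to `attempts` random proxies, removing dead ones from the pool."""
    for _ in range(attempts):
        if not pool:
            raise RuntimeError("Proxy pool is empty")
        proxy = random.choice(pool)
        try:
            r = requests.get(url, proxies=proxy, timeout=10)
            r.raise_for_status()
            return r
        except requests.exceptions.RequestException:
            pool.remove(proxy)  # drop the failing proxy so it isn't picked again
    raise RuntimeError("All attempts failed")

print(fetch_with_rotation("https://httpbin.org/ip", proxies_list).json())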

Rotating proxies with the "power of two choices"

When you're scraping at scale, using a single proxy is almost guaranteed to get you blocked. Randomly picking a proxy for each request helps, but you can still end up overloading one unlucky proxy. A smarter approach is the "power of two choices" algorithm.

Here's the idea in simple terms:

  1. You keep a list of proxies and track how many requests each one has handled.
  2. For every new request, you randomly select two proxies from the pool.
  3. You compare their counters and pick the one that has handled fewer requests.

That's it. This tiny tweak dramatically improves load distribution compared to pure randomness. No single proxy gets hammered unfairly, and your pool lasts longer. If the chosen proxy fails, you can retry immediately with the backup candidate.

import random
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# ---- 1) Define your proxy pool ----
# Each proxy is a dict you can pass to requests.
PROXIES = [
    {"http": "http://192.168.1.101:8080", "https": "http://192.168.1.101:8080"},
    {"http": "http://192.168.1.102:8080", "https": "http://192.168.1.102:8080"},
    {"http": "http://192.168.1.103:8080", "https": "http://192.168.1.103:8080"},
]

# ---- 2) List of URLs you want to scrape ----
URLS = [
    "https://httpbin.org/ip",
    "https://api.ipify.org?format=json",
] * 5  # the pair is repeated 5 times = 10 requests total

# ---- 3) Helper: create a Session bound to one proxy ----
def make_session(proxy):
    s = requests.Session()

    # Attach retry logic (retry on 429 + 5xx, backoff = exponential)
    retry = Retry(
        total=2,
        backoff_factor=0.3,
        status_forcelist=[429, 500, 502, 503, 504],
        allowed_methods={"GET", "HEAD", "OPTIONS"},
        respect_retry_after_header=True,
    )
    adapter = HTTPAdapter(max_retries=retry)
    s.mount("http://", adapter)
    s.mount("https://", adapter)

    # Custom UA so we don't look like the default python-requests bot
    s.headers.update({"User-Agent": "simple-rotator/1.1"})

    # Tie this session to a specific proxy
    s.proxies = proxy
    return s

# Build one session per proxy (so each proxy has its own connection pool)
sessions = [make_session(p) for p in PROXIES]

# Track how many requests each proxy has handled (used for the two-choices comparison)
loads = [0] * len(PROXIES)

# Track how many times in a row each proxy failed
fail_streak = [0] * len(PROXIES)

# Simple circuit breaker states (classic naming):
# CLOSED = healthy (requests flow), HALF_OPEN = probing after a cooldown, OPEN = tripped (temporarily disabled)
CLOSED, HALF_OPEN, OPEN = 0, 1, 2
state = [CLOSED] * len(PROXIES)

# Thresholds for the circuit breaker
MAX_FAILS = 3     # trip the breaker after 3 consecutive fails
COOLDOWN = 5      # keep it tripped for 5 "ticks" (requests) before probing again

# Countdown timers for tripped proxies
cooldown_left = [0] * len(PROXIES)

# ---- 4) Helpers to manage circuit breaker ----
def step_cooldowns():
    """Reduce cooldown timers and reopen proxies when time is up."""
    for k in range(len(PROXIES)):
        if cooldown_left[k] > 0:
            cooldown_left[k] -= 1
            if cooldown_left[k] == 0:
                state[k] = HALF_OPEN
                fail_streak[k] = 0

def pick_two_indices():
    """Pick two distinct proxies that are not CLOSED if possible."""
    alive = [i for i, st in enumerate(state) if st != CLOSED]
    if len(alive) < 2:
        # fallback: pick any two, even if one is CLOSED
        return random.sample(range(len(PROXIES)), 2)
    return random.sample(alive, 2)

# ---- 5) Main loop: send requests ----
for url in URLS:
    step_cooldowns()

    # Pick two candidates and compare load
    i, j = pick_two_indices()
    k_primary = i if loads[i] <= loads[j] else j
    k_backup = j if k_primary == i else i

    def try_fetch(k):
        """Try to fetch one URL with proxy k, update state accordingly."""
        if state[k] == OPEN:
            return None

        loads[k] += 1  # count this attempt against proxy k
        try:
            r = sessions[k].get(url, timeout=10)
            r.raise_for_status()
            fail_streak[k] = 0
            if state[k] == HALF_OPEN:
                state[k] = CLOSED  # success: proxy is healthy again
            return r
        except requests.RequestException:
            fail_streak[k] += 1
            if fail_streak[k] >= MAX_FAILS:
                # too many fails → trip the breaker for a cooldown period
                state[k] = OPEN
                cooldown_left[k] = COOLDOWN
            return None

    # Try primary proxy first
    r = try_fetch(k_primary)
    if r is not None:
        print(f"[OK  p{k_primary}] {url} -> {r.text[:80]!r}")
        continue

    # If primary fails, try the backup
    r = try_fetch(k_backup)
    if r is not None:
        print(f"[OK  p{k_backup}] {url} -> {r.text[:80]!r}")
    else:
        print(f"[FAIL   -- ] {url} -> both choices failed")

# ---- 6) Final report ----
print("Requests still in-flight:", loads)
print("Proxy states:", state)

In this version we:

  • Spin up a requests.Session for every proxy — so each one has its own pool of connections and cookies instead of stepping on each other's toes.
  • Bolt on retry + backoff — so a random 500 or 429 doesn't kill your run.
  • Pick between two random proxies and grab the one that's handled fewer requests — that's the whole "power of two choices" trick, and it balances load way better than dumb randomness.
  • Add a simple circuit breaker — if a proxy keeps face-planting, we bench it for a cooldown instead of wasting requests.
  • Keep a backup candidate handy — if the first choice fails, we fire the same request through the second one.

Net result: you get a small, self-contained load balancer for your scraping jobs. No external libs, no heavy lifting — just plain Python Requests with a couple smart tweaks.

It's a solid choice for small to medium scraping jobs with a handful of proxies. For anything larger, a managed service like ScrapingBee can take over the proxy rotation, ban handling, and geolocation so you don't have to babysit your proxy pool.

Disable SSL verification if needed

Some proxies use self-signed certificates, which can trigger SSL errors. You can bypass them by setting verify=False:

r = requests.get("https://httpbin.org/ip", proxies=proxy, verify=False)

⚠️ Warning: Disabling SSL verification leaves you open to man-in-the-middle attacks. Use this only with trusted proxies or in local testing. For production, always prefer proxies with valid certificates. If you're behind a corporate proxy with TLS interception, install the corporate CA into your cert store instead of disabling verify.
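
If you do have the proxy's CA certificate, a safer option is to point Requests at it rather than switching verification off. A minimal sketch (the bundle path is hypothetical; use wherever your CA file actually lives):

import requests

proxies = {
    "http": "http://192.168.1.100:8080",
    "https": "http://192.168.1.100:8080",
}

# Pass the CA bundle via `verify` instead of disabling certificate checks
# (the path below is a placeholder for your own bundle)
r = requests.get(
    "https://httpbin.org/ip",
    proxies=proxies,
    verify="/etc/ssl/certs/corporate-ca.pem",
    timeout=10,
)
print(r.json())

# Alternatively, set REQUESTS_CA_BUNDLE in the environment so every request
# picks up the bundle without any code changes:
#   export REQUESTS_CA_BUNDLE=/etc/ssl/certs/corporate-ca.pem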

Use ScrapingBee's managed proxy API

At some point, juggling your own proxy pool becomes more trouble than it's worth. Dead IPs, bans, retries, geolocation rules — it all adds complexity that takes time away from your actual project.

ScrapingBee's API solves this by giving you a single endpoint that takes care of the hard parts for you. Under the hood it:

  • Rotates through a large pool of high-quality proxies.
  • Handles geolocation so you can appear to browse from a specific country.
  • Retries failed requests automatically.
  • Optionally renders JavaScript with a headless browser if you need it.

From your side, it's just one API call — no proxy pool to maintain, no manual rotation logic. If you already have curl commands, you can even turn them into ScrapingBee-ready code with the Curl Converter.

from scrapingbee import ScrapingBeeClient

# create the client (tip: store your API key in an environment variable)
client = ScrapingBeeClient(api_key="REPLACE-WITH-YOUR-API-KEY")

# simple example: fetch a page with custom options
response = client.get(
    "https://www.scrapingbee.com/blog/",
    params={
        "block_resources": True,     # block images/CSS to save bandwidth
        "country_code": "us",        # request page as if from the US
        "render_js": True,           # run JS (uses headless browser)
        "premium_proxy": False,      # toggle premium proxies
        "js_scenario": {             # optional scripted interactions
            "instructions": [
                {"wait_for": "#slow_button"},
                {"click": "#slow_button"},
                {"scroll_x": 1000},
                {"wait": 1000},
            ]
        },
    },
    headers={"User-Agent": "my-scraper/1.0"},
    cookies={"session": "abcd"},
)

# response behaves like a normal requests.Response
print("status:", response.status_code)
print("body snippet:", response.text[:500])

For this code to work, you'll need to install the ScrapingBee Python client, which builds on top of Requests:

pip install scrapingbee

Or with Poetry:

poetry add scrapingbee

With ScrapingBee, you still write code the same way you would with Python Requests, but the messy parts — proxies, retries, bans, headless browsers — are handled for you.

Ready to build faster with fewer blocks?

Manually juggling proxy lists is fine for small tests, but it quickly becomes a maintenance headache. ScrapingBee takes care of all the tricky parts — proxy rotation, geolocation, JavaScript rendering, retries — so you can stay focused on building your app or scraper.

With one API call, you get reliable, unblocked access without having to babysit your code.

👉 Give it a try and see how much smoother scraping can be. Explore the ScrapingBee pricing and start today.

Frequently asked questions

What is the purpose of using proxies with Python Requests?

Proxies mask your real IP address, help you avoid rate limits, and bypass geo-blocks. For example, if a site only allows traffic from the US, a proxy located in the US makes your script appear local, reducing the chance of blocks.

How do I set up a basic proxy with Python Requests?

Define a proxy dictionary and pass it to requests.get(). This is the most common requests library proxy example:

import requests

proxies = {"http": "http://192.168.1.100:8080",
           "https": "http://192.168.1.100:8080"}

r = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(r.json())

Can I use authenticated proxies with Python Requests?

Yes. Include the username and password directly in the proxy URL:

proxies = {
  "http": "http://user:pass@192.168.1.100:8080",
  "https": "http://user:pass@192.168.1.100:8080"
}

If your password contains special characters (@, :, %, etc.), you'll need to URL-encode it to avoid 407 Proxy Authentication Required errors.
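
For example, urllib.parse.quote handles the encoding (the credentials here are placeholders):

import urllib.parse

password = urllib.parse.quote("p@ss:word", safe="")  # -> "p%40ss%3Aword"
proxies = {
    "http": f"http://user:{password}@192.168.1.100:8080",
    "https": f"http://user:{password}@192.168.1.100:8080",
}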

How can I rotate proxies in my Python Requests script?

Keep a list of proxy dictionaries and select one randomly for each request. Adding retry logic ensures you don't get stuck on a dead proxy:

import requests, random

proxies = [{"http": "http://1.2.3.4:8080"}, {"http": "http://5.6.7.8:8080"}]
for _ in range(3):
    try:
        p = random.choice(proxies)
        print(requests.get("https://httpbin.org/ip", proxies=p, timeout=5).json())
        break
    except:
        print("Proxy failed, retrying...")

Can I use Python Requests with SOCKS proxies?

Yes. The Requests library can work with SOCKS proxies, but you need an extra dependency first. Install it with:

pip install "requests[socks]"

After that, you can configure your proxies dictionary using a socks5h:// URL (the h ensures DNS lookups go through the proxy):

import requests

proxies = {
  "http": "socks5h://127.0.0.1:9050",
  "https": "socks5h://127.0.0.1:9050"
}

r = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(r.json())

If the proxy is set up correctly, the IP returned will be the proxy's IP, not your real one. SOCKS5 proxies are often used in privacy tools like Tor, but they also work in scraping scenarios when your provider only offers SOCKS endpoints.

What are common proxy errors in Python Requests?

When working with proxies in Python Requests, you'll probably run into a few common errors:

  • 407 Proxy Authentication Required — your script didn't send valid proxy credentials.
  • 401 Unauthorized — the username/password is wrong.
  • 403 Forbidden — either your IP isn't allowed, or the proxy provider blocks the target site.
  • Timeouts — the proxy is dead, overloaded, or too slow to respond.

Most of these can be fixed by double-checking your proxy URL, encoding special characters in passwords, and confirming that the proxy server is alive and reachable.

Conclusion

Getting started with a proxy in Python Requests is an easy way to hide your IP, dodge basic blocks, and keep your scraping code running more smoothly. In this guide we covered the essentials:

  • setting up a proxy dictionary
  • adding authentication when needed
  • using sessions to reuse cookies and speed things up
  • rotating proxies so one IP doesn't get burned out
  • and handling different response types (text, JSON, binary).

This setup is perfect for learning or small-scale projects. But once the traffic ramps up, the cracks start to show — proxies fail, rotation lags, and bans pile up. That's where a managed solution like ScrapingBee shines. It gives you fresh, high-quality proxies, automatic retries, geo-targeting, and even JavaScript rendering, all behind a single API call.

If you'd rather spend time building features instead of wrestling with dead proxies, it's worth a look. You can start free and see how much smoother scraping feels: ScrapingBee Pricing.

👉 Want fewer blocks and faster results? Let ScrapingBee handle the proxy headaches while you focus on getting the data.

Maxine Meurer

Maxine is a software engineer and passionate technical writer, who enjoys spending her free time incorporating her knowledge of environmental technologies into web development.