How to use a proxy with Python Requests

This guide walks you through integrating HypeProxies with the Python Requests library to bypass IP restrictions, overcome geo-targeting, and manage rate limiting in your web scraping workflows.

Prerequisites

Before we dive in, make sure your environment is set up and you have your proxy credentials ready.

1. Python and the Requests library


Python Requests documentation

First, ensure you have Python 3.9 or newer installed. You'll also need the Requests library, which is the standard for making HTTP requests in Python. If you don't have it installed, add it to your project with:

pip install requests
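
To quickly confirm the install worked, you can print the library's version:

python -c "import requests; print(requests.__version__)"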

2. Your HypeProxies credentials

Next, you'll need an active HypeProxies subscription to get your proxy list.

Step 1 (access your dashboard) – log into your HypeProxies dashboard. If you're new, you can request a free trial to get started.

Step 2 (find your active plan) – navigate to your active services. If you don't have a plan yet, explore our full range of high-speed proxy solutions to find the perfect fit for your project.

Step 3 (locate and copy your proxy list) – in your service details, you'll find your proxy list formatted as IP:PORT:USERNAME:PASSWORD.

HypeProxies dashboard

Each line contains a unique proxy, but your username and password are the same for all of them. Copy these proxies as you'll need them later in the tutorial.

With the setup complete, let's make our first request.

Making a single request with a proxy

Integrating a proxy with Requests is straightforward. The library accepts a proxies dictionary that maps the protocol (http or https) to your formatted proxy URL.

Let's start by using just the first proxy from your list to make a request to https://ipapi.co/json/, a simple service that returns your public IP address along with geolocation details.

import requests
import json

# Define your proxy credentials
proxy_ip = "<PROXY_IP>"  # Replace with your proxy server IP address
proxy_port = "<PROXY_PORT>"  # Replace with your proxy server port
username = "<USERNAME>"  # Replace with your proxy username
password = "<PASSWORD>"  # Replace with your proxy password

# Format the proxy URL with authentication
proxy_url = f"http://{username}:{password}@{proxy_ip}:{proxy_port}"

# Create the proxies dictionary for Requests
proxies = {
    "http": proxy_url,
    "https": proxy_url,
}

try:
    # Make the GET request through the proxy
    response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=10)
    response.raise_for_status()  # Check if request was successful

    # Get the JSON data from response
    data = response.json()

    # Print in pretty format
    print("Success! API Response:")
    print(json.dumps(data, indent=2))

except requests.exceptions.RequestException as e:
    print(f"The request failed: {e}")

When you run this script, the output will show the IP address of your HypeProxies proxy, confirming that your request was successfully routed through it.

Console output

This simple method is great for one-off tasks, but for more complex projects, you'll need a more efficient approach.

Using sessions for better performance

When your script needs to make multiple requests, creating a new connection for each one is inefficient. A much better approach is to use a requests.Session object. Sessions reuse the underlying TCP connection, which provides a significant speed boost and reduces overhead.

A Session object also persists configurations, like proxies and headers, across all requests made with it.

import requests

# Proxy credentials (replace with your actual values)
PROXY_IP = "<PROXY_IP>"
PROXY_PORT = "<PROXY_PORT>"
USERNAME = "<USERNAME>"
PASSWORD = "<PASSWORD>"

# Create a persistent session object
session = requests.Session()

# Configure proxy for all requests in this session
proxy_url = f"http://{USERNAME}:{PASSWORD}@{PROXY_IP}:{PROXY_PORT}"
session.proxies = {"http": proxy_url, "https": proxy_url}

# Set browser-like headers
session.headers.update(
    {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
    }
)

try:
    print("Testing session with proxy...")

    # Test 1: Check IP address
    response1 = session.get("https://httpbin.org/ip")
    print(f"Current IP: {response1.json()['origin']}")

    # Test 2: Check user agent
    response2 = session.get("https://httpbin.org/user-agent")
    print(f"User Agent: {response2.json()['user-agent']}")

finally:
    session.close()
    print("Session closed.")

Using sessions is a critical best practice for any serious scraping project. Now, let's learn how to manage an entire list of proxies.

Managing your proxy pool with client-side rotation

HypeProxies provides static ISP proxies, meaning each one gives you an IP that never changes. That stability is perfect for tasks that need a consistent identity, but for large-scale web scraping you can't just hammer a server from a single IP. That's a fast track to getting blocked.

The solution is client-side rotation. It’s on you to intelligently distribute requests across your entire proxy pool. This tactic spreads your digital footprint, making your scraper look less like a bot and more like organic traffic. Let's walk through two classic rotation strategies: round-robin and random.

First, create a proxies.txt file:

<PROXY_IP_1>:<PROXY_PORT_1>:<USERNAME>:<PASSWORD> 
<PROXY_IP_2>:<PROXY_PORT_2>:<USERNAME>:<PASSWORD>
<PROXY_IP_3>:<PROXY_PORT_3>:<USERNAME>:<PASSWORD>

Here's how you can load your proxy list from the proxies.txt file and implement the rotation strategies:

import requests
import random
from itertools import cycle


def load_proxies_from_file(filepath):
    """Loads proxies from a file (format: IP:PORT:USER:PASS)"""
    proxies = []
    with open(filepath, "r") as f:
        for line in f:
            line = line.strip()
            if line:
                ip, port, username, password = line.split(":")
                proxy_url = f"http://{username}:{password}@{ip}:{port}"
                proxies.append({"http": proxy_url, "https": proxy_url})
    return proxies


# --- Main Execution ---
all_proxies = load_proxies_from_file("proxies.txt")

if not all_proxies:
    print("No proxies found in proxies.txt. Exiting.")
    exit()

# Create an iterator for round-robin rotation
proxy_cycler = cycle(all_proxies)

url = "https://api.ipify.org?format=json"

# --- 1. Round-Robin Rotation ---
# This method cycles through your proxies in order.
print("--- Testing Round-Robin Rotation ---")
for i in range(5):
    proxy = next(proxy_cycler)
    try:
        response = requests.get(url, proxies=proxy, timeout=10)
        print(f"Request {i+1} (Round-Robin): Success! IP: {response.json()['ip']}")
    except requests.exceptions.RequestException:
        print(
            f"Request {i+1} (Round-Robin): Failed with proxy {proxy['http'].split('@')[1]}"
        )

# --- 2. Random Rotation ---
# This method picks a random proxy for each request.
print("\n--- Testing Random Rotation ---")
for i in range(5):
    proxy = random.choice(all_proxies)
    try:
        response = requests.get(url, proxies=proxy, timeout=10)
        print(f"Request {i+1} (Random): Success! IP: {response.json()['ip']}")
    except requests.exceptions.RequestException:
        print(
            f"Request {i+1} (Random): Failed with proxy {proxy['http'].split('@')[1]}"
        )

This rotation pattern is fundamental for many scraping use cases, from market research to social media account management.

When you run the code, you'll see a clear difference in how the IPs are used. The round-robin approach cycles through your list sequentially, while the random approach may reuse an IP before all the others have been used.

Console output

This code gives you a powerful template for distributing your requests, significantly reducing the chance of being rate-limited.
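
If you want the best of both worlds, you can combine rotation with sessions. The sketch below is an optional pattern, not something the API requires: it keeps one Session per proxy so each rotated IP also benefits from connection reuse, and it assumes the same proxies.txt format as above.

import requests
from itertools import cycle


def load_proxies_from_file(filepath):
    """Same loader as above (format: IP:PORT:USER:PASS)."""
    proxies = []
    with open(filepath, "r") as f:
        for line in f:
            line = line.strip()
            if line:
                ip, port, username, password = line.split(":")
                proxy_url = f"http://{username}:{password}@{ip}:{port}"
                proxies.append({"http": proxy_url, "https": proxy_url})
    return proxies


# Build one persistent session per proxy so connections are reused
sessions = []
for proxy in load_proxies_from_file("proxies.txt"):
    session = requests.Session()
    session.proxies = proxy
    sessions.append(session)

session_cycler = cycle(sessions)

for i in range(5):
    session = next(session_cycler)
    try:
        response = session.get("https://api.ipify.org?format=json", timeout=10)
        print(f"Request {i+1}: IP {response.json()['ip']}")
    except requests.exceptions.RequestException as e:
        print(f"Request {i+1}: failed ({e})")

# Close all sessions when finished
for session in sessions:
    session.close()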

Building a resilient scraper with automatic retries

In real scraping scenarios, network requests can fail for many reasons: a temporary server error, a network glitch, or a proxy timing out. A robust script shouldn't crash on the first failure. By combining requests.Session with a retry strategy, you can build a much more resilient scraper.

This example creates a session that'll automatically retry failed requests with an exponential backoff delay, which is a respectful way to handle temporary server issues.

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def load_proxies(filepath):
    """Load proxies from a file and format them for Requests."""
    with open(filepath, "r") as f:
        proxies = [line.strip() for line in f if line.strip()]
    return proxies


def create_resilient_session(proxy_string, retries=3):
    """Create a configured session with proxy and retry logic."""
    session = requests.Session()

    # Parse and set the proxy
    ip, port, username, password = proxy_string.split(":")
    proxy_url = f"http://{username}:{password}@{ip}:{port}"
    session.proxies = {"http": proxy_url, "https": proxy_url}

    # Define the retry strategy
    retry_strategy = Retry(
        total=retries,
        backoff_factor=1,  # Waits 1s, 2s, 4s between retries
        status_forcelist=[429, 500, 502, 503, 504],  # Retry on these server error codes
    )

    # Mount the adapter to the session
    adapter = HTTPAdapter(max_retries=retry_strategy)
    session.mount("http://", adapter)
    session.mount("https://", adapter)

    # Set a realistic User-Agent
    session.headers.update(
        {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
        }
    )

    return session


# --- Main Execution ---
proxy_list = load_proxies("proxies.txt")
if not proxy_list:
    print("No proxies loaded. Check your proxies.txt file.")
    exit()

# Use the first proxy for this example
selected_proxy = proxy_list[0]
print(f"Using proxy: {selected_proxy.split(':')[0]}:{selected_proxy.split(':')[1]}")

session = create_resilient_session(selected_proxy)

try:
    print("\nMaking a resilient request...")
    response = session.get("http://httpbin.org/ip", timeout=15)
    response.raise_for_status()
    print(f"Success! Response: {response.json()}")

except requests.exceptions.RequestException as e:
    print(f"Request failed after all retries: {e}")

finally:
    session.close()
    print("Session closed.")

This setup gives your scrapers a professional level of reliability, ensuring that temporary issues don't derail your entire process.
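
You can also combine retries with your proxy pool: if one proxy exhausts its retries, fail over to the next. Here's a minimal sketch that reuses the create_resilient_session helper from the example above (get_with_failover is our own illustration, not part of Requests):

def get_with_failover(url, proxy_strings, retries_per_proxy=3):
    """Try each proxy in turn; return the first successful response."""
    for proxy_string in proxy_strings:
        session = create_resilient_session(proxy_string, retries=retries_per_proxy)
        try:
            response = session.get(url, timeout=15)
            response.raise_for_status()
            return response
        except requests.exceptions.RequestException:
            continue  # This proxy exhausted its retries; try the next one
        finally:
            session.close()
    raise RuntimeError(f"All proxies failed for {url}")


# Example usage with the proxy_list loaded earlier:
# response = get_with_failover("http://httpbin.org/ip", proxy_list)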

Troubleshooting common issues

Even with a perfect setup, you might run into issues. Here are some common problems and their solutions.

Problem – ProxyError: Max retries exceeded or 407 Proxy Authentication Required

This error almost always points to an authentication failure. Double-check that the username and password in your script perfectly match the credentials in your HypeProxies dashboard.
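
One subtle variant: if your username or password contains special characters (such as @ or :), they'll break the proxy URL's structure. In that case, percent-encode the credentials before building the URL, for example:

from urllib.parse import quote

username = "<USERNAME>"
password = "<PASSWORD>"  # e.g. a password containing @ or :

# Percent-encoding keeps special characters from breaking the URL structure
proxy_url = f"http://{quote(username, safe='')}:{quote(password, safe='')}@<PROXY_IP>:<PROXY_PORT>"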

Problem – requests are consistently timing out

This suggests a connectivity issue. First, check if the proxy is online using our free online proxy checker. If the proxy is active, the issue might be a firewall on your local network blocking the connection. Try increasing the timeout value in your request (e.g., timeout=30).
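
Requests also accepts a (connect, read) tuple for timeout, which lets you keep the connection timeout short while allowing slower reads:

import requests

proxies = {"http": "<PROXY_URL>", "https": "<PROXY_URL>"}  # your formatted proxy URL

# 10 seconds to establish the connection, 30 seconds to receive data
response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=(10, 30))
print(response.json())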

Problem – I'm still getting blocked, even with a proxy

Websites use advanced techniques to detect bots. Simply hiding your IP isn't always enough. Ensure you're also rotating your User-Agent header, adding random delays between requests, and respecting the site's robots.txt file. For complex targets, explore our other integration guides for tools that can manage browser fingerprints.
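
As a starting point, here's a minimal sketch of rotating User-Agent headers and adding random delays; the User-Agent strings and target URL below are placeholders to adapt:

import random
import time

import requests

# Illustrative User-Agent pool; swap in current browser strings
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.4 Safari/605.1.15",
]

proxies = {"http": "<PROXY_URL>", "https": "<PROXY_URL>"}

for i in range(3):
    # Pick a fresh User-Agent for each request
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    response = requests.get("https://httpbin.org/user-agent", headers=headers, proxies=proxies, timeout=10)
    print(response.json())
    time.sleep(random.uniform(2, 5))  # Random delay to mimic human pacing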

Problem – some of my proxies are slow or failing

While we guarantee 99.9% uptime, individual proxies can sometimes face temporary issues. You can use our proxy checker tool to test your list in bulk. If you find a consistently failing proxy, contact our support team for a replacement.
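
If you'd rather check programmatically, a quick concurrent health check (a sketch using httpbin.org as the test endpoint) can flag failing proxies in your list:

import requests
from concurrent.futures import ThreadPoolExecutor


def check_proxy(proxy_string):
    """Test one IP:PORT:USER:PASS proxy; return (ip, ok)."""
    ip, port, username, password = proxy_string.split(":")
    proxy_url = f"http://{username}:{password}@{ip}:{port}"
    try:
        requests.get(
            "https://httpbin.org/ip",
            proxies={"http": proxy_url, "https": proxy_url},
            timeout=10,
        )
        return ip, True
    except requests.exceptions.RequestException:
        return ip, False


with open("proxies.txt") as f:
    proxy_lines = [line.strip() for line in f if line.strip()]

# Check proxies in parallel so a large list finishes quickly
with ThreadPoolExecutor(max_workers=10) as executor:
    for ip, ok in executor.map(check_proxy, proxy_lines):
        print(f"{ip}: {'OK' if ok else 'FAILED'}")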

Getting support

If you run into any trouble, our team is here to help.
