Advanced Web Scraping Features


HttpIdentity Feature

Some website systems require the user to be logged in to view the content; in this case, we can use an HttpIdentity. Here is how you can set it up:

// Create a new instance of HttpIdentity
HttpIdentity id = new HttpIdentity();

// Set the network username and password for authentication
id.NetworkUsername = "username";
id.NetworkPassword = "pwd";

// Add the identity to the collection of identities
Identities.Add(id);
' Create a new instance of HttpIdentity
Dim id As New HttpIdentity()

' Set the network username and password for authentication
id.NetworkUsername = "username"
id.NetworkPassword = "pwd"

' Add the identity to the collection of identities
Identities.Add(id)

One of the most powerful features in IronWebScraper is the ability to use thousands of unique user credentials and/or browser engines, so the scraper can impersonate different users and scrape websites across multiple login sessions.

public override void Init()
{
    // Set the license key for IronWebScraper
    License.LicenseKey = "LicenseKey";

    // Set the logging level to capture all logs
    this.LoggingLevel = WebScraper.LogLevel.All;

    // Assign the working directory for the output files
    this.WorkingDirectory = AppSetting.GetAppRoot() + @"\ShoppingSiteSample\Output\";

    // Define an array of proxies
    var proxies = "IP-Proxy1:8080,IP-Proxy2:8081".Split(',');

    // Iterate over common Chrome desktop user agents
    foreach (var UA in IronWebScraper.CommonUserAgents.ChromeDesktopUserAgents)
    {
        // Iterate over the proxies
        foreach (var proxy in proxies)
        {
            // Add a new HTTP identity with specific user agent and proxy
            Identities.Add(new HttpIdentity()
            {
                UserAgent = UA,
                UseCookies = true,
                Proxy = proxy
            });
        }
    }

    // Make an initial request to the website with a parse method
    this.Request("http://www.Website.com", Parse);
}
Public Overrides Sub Init()
	' Set the license key for IronWebScraper
	License.LicenseKey = "LicenseKey"

	' Set the logging level to capture all logs
	Me.LoggingLevel = WebScraper.LogLevel.All

	' Assign the working directory for the output files
	Me.WorkingDirectory = AppSetting.GetAppRoot() & "\ShoppingSiteSample\Output\"

	' Define an array of proxies
	Dim proxies = "IP-Proxy1:8080,IP-Proxy2:8081".Split(","c)

	' Iterate over common Chrome desktop user agents
	For Each UA In IronWebScraper.CommonUserAgents.ChromeDesktopUserAgents
		' Iterate over the proxies
		For Each proxy In proxies
			' Add a new HTTP identity with specific user agent and proxy
			Identities.Add(New HttpIdentity() With {
				.UserAgent = UA,
				.UseCookies = True,
				.Proxy = proxy
			})
		Next proxy
	Next UA

	' Make an initial request to the website with a parse method
	Me.Request("http://www.Website.com", Parse)
End Sub
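
Each Request call above passes a Parse callback that the examples do not define. The snippet below is a minimal sketch of such a handler; the CSS selector and the "Title" field name are illustrative assumptions, not part of the target site.

public override void Parse(Response response)
{
    // For every link matched by the (assumed) CSS selector, emit one scraped record
    foreach (var titleLink in response.Css("h2.title a"))
    {
        // TextContentClean returns the element's visible text with whitespace normalized
        Scrape(new ScrapedData() { { "Title", titleLink.TextContentClean } });
    }
}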

Several properties are available to give your scraper different behaviors, helping to prevent websites from blocking you.

Some of these properties include:

  • NetworkDomain: The network domain to be used for user authentication. Supports Windows, NTLM, Kerberos, Linux, BSD, and Mac OS X networks. Must be used with NetworkUsername and NetworkPassword.
  • NetworkUsername: The network/http username to be used for user authentication. Supports HTTP, Windows networks, NTLM, Kerberos, Linux networks, BSD networks, and Mac OS.
  • NetworkPassword: The network/http password to be used for user authentication. Supports HTTP, Windows networks, NTLM, Kerberos, Linux networks, BSD networks, and Mac OS.
  • Proxy: To set proxy settings.
  • UserAgent: To set the browser engine (e.g., Chrome desktop, Chrome mobile, Chrome tablet, IE, Firefox).
  • HttpRequestHeaders: Custom header values to be used with this identity; accepts a Dictionary<string, string> (see the sketch after this list).
  • UseCookies: Enable/disable using cookies.
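
As an illustration of HttpRequestHeaders, the snippet below (intended for Init(), like the examples above) attaches custom header values to an identity; the user-agent string, proxy address, and header values are placeholder assumptions.

// A minimal sketch: an identity with custom request headers (all values below are placeholders)
HttpIdentity customIdentity = new HttpIdentity();
customIdentity.UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)";
customIdentity.Proxy = "IP-Proxy1:8080";
customIdentity.UseCookies = true;

// HttpRequestHeaders accepts a Dictionary<string, string> of header names and values
customIdentity.HttpRequestHeaders = new Dictionary<string, string>
{
    { "Accept-Language", "en-US" },
    { "Referer", "http://www.Website.com" }
};

// Register the identity so the scraper can use it
Identities.Add(customIdentity);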

By default, IronWebScraper runs the scraper with randomly chosen identities. If you need a specific identity to parse a given page, you can specify it explicitly:

public override void Init()
{
    // Set the license key for IronWebScraper
    License.LicenseKey = "LicenseKey";

    // Set the logging level to capture all logs
    this.LoggingLevel = WebScraper.LogLevel.All;

    // Assign the working directory for the output files
    this.WorkingDirectory = AppSetting.GetAppRoot() + @"\ShoppingSiteSample\Output\";

    // Create a new instance of HttpIdentity
    HttpIdentity identity = new HttpIdentity();

    // Set the network username and password for authentication
    identity.NetworkUsername = "username";
    identity.NetworkPassword = "pwd";

    // Add the identity to the collection of identities
    Identities.Add(identity);

    // Make a request to the website with the specified identity
    this.Request("http://www.Website.com", Parse, identity);
}
Public Overrides Sub Init()
	' Set the license key for IronWebScraper
	License.LicenseKey = "LicenseKey"

	' Set the logging level to capture all logs
	Me.LoggingLevel = WebScraper.LogLevel.All

	' Assign the working directory for the output files
	Me.WorkingDirectory = AppSetting.GetAppRoot() & "\ShoppingSiteSample\Output\"

	' Create a new instance of HttpIdentity
	Dim identity As New HttpIdentity()

	' Set the network username and password for authentication
	identity.NetworkUsername = "username"
	identity.NetworkPassword = "pwd"

	' Add the identity to the collection of identities
	Identities.Add(identity)

	' Make a request to the website with the specified identity
	Me.Request("http://www.Website.com", Parse, identity)
End Sub

Enable the Web Cache Feature

This feature caches requested pages. It is typically used during development and testing, letting developers cache the pages they need and reuse them after updating code. You can then run your code against cached pages after restarting your web scraper, without connecting to the live website each time (action replay).

You can use it in the Init() method:

// Enable web cache without an expiration time
EnableWebCache();

// OR enable web cache with a specified expiration time
EnableWebCache(new TimeSpan(1, 30, 30));
' Enable web cache without an expiration time
EnableWebCache()

' OR enable web cache with a specified expiration time
EnableWebCache(New TimeSpan(1, 30, 30))

The cached data is saved to the WebCache folder inside the working directory.

public override void Init()
{
    // Set the license key for IronWebScraper
    License.LicenseKey = "LicenseKey";

    // Set the logging level to capture all logs
    this.LoggingLevel = WebScraper.LogLevel.All;

    // Assign the working directory for the output files
    this.WorkingDirectory = AppSetting.GetAppRoot() + @"\ShoppingSiteSample\Output\";

    // Enable web cache with a specific expiration time of 1 hour, 30 minutes, and 30 seconds
    EnableWebCache(new TimeSpan(1, 30, 30));

    // Make an initial request to the website with a parse method
    this.Request("http://www.Website.com", Parse);
}
Public Overrides Sub Init()
	' Set the license key for IronWebScraper
	License.LicenseKey = "LicenseKey"

	' Set the logging level to capture all logs
	Me.LoggingLevel = WebScraper.LogLevel.All

	' Assign the working directory for the output files
	Me.WorkingDirectory = AppSetting.GetAppRoot() & "\ShoppingSiteSample\Output\"

	' Enable web cache with a specific expiration time of 1 hour, 30 minutes, and 30 seconds
	EnableWebCache(New TimeSpan(1, 30, 30))

	' Make an initial request to the website with a parse method
	Me.Request("http://www.Website.com", Parse)
End Sub

IronWebScraper also lets the engine continue scraping after the code is restarted: set the engine's start process name (crawl ID) using Start(CrawlID).

static void Main(string[] args)
{
    // Create an object from the Scraper class
    EngineScraper scrape = new EngineScraper();

    // Start the scraping process with the specified crawl ID
    scrape.Start("enginestate");
}
Shared Sub Main(ByVal args() As String)
	' Create an object from the Scraper class
	Dim scrape As New EngineScraper()

	' Start the scraping process with the specified crawl ID
	scrape.Start("enginestate")
End Sub

The execution request and response will be saved in the SavedState folder inside the working directory.
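
To make the resumable-crawl pattern concrete, here is a minimal sketch of a scraper wired for Start(CrawlID). The class name, URL, and CSS selector are assumptions for illustration; the point is that running the program again with the same "enginestate" ID lets the engine pick up from the SavedState folder instead of starting over.

public class EngineScraper : WebScraper
{
    public override void Init()
    {
        // Same setup pattern as the earlier examples
        License.LicenseKey = "LicenseKey";
        this.WorkingDirectory = AppSetting.GetAppRoot() + @"\ShoppingSiteSample\Output\";

        // Queue the first page; progress is checkpointed under the SavedState folder
        this.Request("http://www.Website.com", Parse);
    }

    public override void Parse(Response response)
    {
        // Illustrative only: record the text of every H1 heading on the page (selector assumed)
        foreach (var heading in response.Css("h1"))
        {
            Scrape(new ScrapedData() { { "Heading", heading.TextContentClean } });
        }
    }
}

static void Main(string[] args)
{
    // The first run starts the crawl; a later run with the same ID resumes it from SavedState
    new EngineScraper().Start("enginestate");
}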

Throttling

We can control the number of open connections and the request rate per domain.

public override void Init()
{
    // Set the license key for IronWebScraper
    License.LicenseKey = "LicenseKey";

    // Set the logging level to capture all logs
    this.LoggingLevel = WebScraper.LogLevel.All;

    // Assign the working directory for the output files
    this.WorkingDirectory = AppSetting.GetAppRoot() + @"\ShoppingSiteSample\Output\";

    // Set the total number of allowed open HTTP requests (threads)
    this.MaxHttpConnectionLimit = 80;

    // Set minimum polite delay (pause) between requests to a given domain or IP address
    this.RateLimitPerHost = TimeSpan.FromMilliseconds(50);

    // Set the allowed number of concurrent HTTP requests (threads) per hostname or IP address
    this.OpenConnectionLimitPerHost = 25;

    // Do not obey the robots.txt files
    this.ObeyRobotsDotTxt = false;

    // Throttle requests by domain hostname; ThrottleMode can also throttle by the host server's IP address
    this.ThrottleMode = Throttle.ByDomainHostName;

    // Make an initial request to the website with a parse method
    this.Request("https://www.Website.com", Parse);
}
Public Overrides Sub Init()
	' Set the license key for IronWebScraper
	License.LicenseKey = "LicenseKey"

	' Set the logging level to capture all logs
	Me.LoggingLevel = WebScraper.LogLevel.All

	' Assign the working directory for the output files
	Me.WorkingDirectory = AppSetting.GetAppRoot() & "\ShoppingSiteSample\Output\"

	' Set the total number of allowed open HTTP requests (threads)
	Me.MaxHttpConnectionLimit = 80

	' Set minimum polite delay (pause) between requests to a given domain or IP address
	Me.RateLimitPerHost = TimeSpan.FromMilliseconds(50)

	' Set the allowed number of concurrent HTTP requests (threads) per hostname or IP address
	Me.OpenConnectionLimitPerHost = 25

	' Do not obey the robots.txt files
	Me.ObeyRobotsDotTxt = False

	' Throttle requests by domain hostname; ThrottleMode can also throttle by the host server's IP address
	Me.ThrottleMode = Throttle.ByDomainHostName

	' Make an initial request to the website with a parse method
	Me.Request("https://www.Website.com", Parse)
End Sub

Throttling properties

  • MaxHttpConnectionLimit
    Total number of allowed open HTTP requests (threads)
  • RateLimitPerHost
    Minimum polite delay (pause) between requests to a given domain or IP address, set as a TimeSpan
  • OpenConnectionLimitPerHost
    Allowed number of concurrent HTTP requests (threads) per hostname
  • ThrottleMode
    Controls whether throttling is applied per hostname only, or also by the host server's IP address. Throttling by IP address is polite when multiple scraped domains are hosted on the same machine.

Get started with IronWebScraper

Start using IronWebScraper in your project today with a free trial.


Frequently Asked Questions

How can I authenticate against websites that require a login in C#?

You can use the HttpIdentity feature in IronWebScraper to authenticate by configuring properties such as NetworkDomain, NetworkUsername, and NetworkPassword.

What is the benefit of using the web cache during development?

The web cache feature lets you cache requested pages for reuse, saving time and resources by avoiding repeated connections to live sites, which is particularly useful during the development and testing phases.

How can I manage multiple login sessions in web scraping?

IronWebScraper allows thousands of unique user credentials and browser engines to simulate multiple login sessions, which helps prevent the scraper from being detected and blocked by websites.

What advanced throttling options are available for web scraping?

IronWebScraper offers a ThrottleMode setting that intelligently throttles requests based on hostnames and IP addresses, ensuring polite interaction with shared hosting environments.

How can I use a proxy with IronWebScraper?

To use a proxy, define an array of proxies and assign them to HttpIdentity instances in IronWebScraper, allowing requests to be routed through different IP addresses for anonymity and access control.

How does IronWebScraper manage request delays to avoid overloading servers?

The RateLimitPerHost setting in IronWebScraper specifies the minimum delay between requests to a given domain or IP address, helping prevent server overload by spacing out requests.

Can web scraping be resumed after an interruption?

Yes, IronWebScraper can resume scraping after an interruption by using the Start(CrawlID) method, which saves the execution state and resumes from the last saved point.

How can I control the number of concurrent HTTP connections in a web scraper?

In IronWebScraper, you can set the MaxHttpConnectionLimit property to control the total number of allowed open HTTP requests, helping manage server load and resources.

What options are available for logging web scraping activity?

IronWebScraper lets you set the logging level with the LoggingLevel property, enabling exhaustive logging for detailed analysis and troubleshooting during scraping operations.

Darrius Serrant
Full Stack Software Engineer (WebOps)

Darrius Serrant holds a Bachelor's degree in Computer Science from the University of Miami and works as a Full Stack WebOps Marketing Engineer at Iron Software. Drawn to coding from a young age, he saw computer science as both mysterious and accessible, making it the ...
