How to Scrape Data from Websites in C#

by Ahmed Aboelmagd

IronWebScraper is a .NET library for web scraping, web data extraction, and web content parsing. It is an easy-to-use library that can be added to Microsoft Visual Studio projects for use in development and production.

IronWebScraper has many unique features and capabilities, such as controlling allowed and prohibited pages, objects, and media. It also allows for the management of multiple identities, web caching, and many other features that we will cover in this tutorial.

Get started with IronWebScraper

Start using IronWebScraper in your project today with a free trial.

Target Audience

This tutorial targets software developers with basic to advanced programming skills who wish to build and implement solutions with advanced scraping capabilities (website scraping, website data gathering and extraction, website content parsing, web harvesting).

Web scraping has never been a simple task, and there has been no dominant framework for it in C# or .NET programming environments. Iron Web Scraper was created to change this.

Skills required

  1. Basic programming fundamentals using one of Microsoft's programming languages, such as C# or VB.NET
  2. A basic understanding of web technologies (HTML, JavaScript, jQuery, CSS, etc.) and how they work
  3. Basic knowledge of the DOM, XPath, and HTML and CSS selectors

Tools

  1. Microsoft Visual Studio 2010 or above
  2. Web developer extensions for browsers such as web inspector for Chrome or Firebug for Firefox

Why Scrape? (Reasons and Concepts)

If you want to build a product or solution that can:

  1. Extract website data
  2. Compare content, prices, features, etc. from multiple websites
  3. Scan and cache website content

If one or more of these applies to you, then IronWebScraper is a great library to fit your needs.

How to Install IronWebScraper?

After you create a new project (see Appendix A), you can add the IronWebScraper library to your project either by installing it automatically using NuGet or by manually installing the DLL.

Install using NuGet

To add the IronWebScraper library to our project using NuGet, we can use either the visual interface (NuGet Package Manager) or commands in the Package Manager Console.

Using NuGet Package Manager

  1. Right-click the project name -> Select Manage NuGet Packages

  2. From the Browse tab -> search for IronWebScraper -> Install

  3. Click Ok

  4. And we are Done

Using NuGet Package Console

  1. From Tools -> NuGet Package Manager -> Package Manager Console

  2. Choose Class Library Project as Default Project
  3. Run command -> Install-Package IronWebScraper

Install Manually

  1. Go to ironsoftware.com
  2. Click IronWebScraper, or visit its page directly at https://ironsoftware.com/csharp/webscraper
  3. Click Download DLL.
  4. Extract the downloaded compressed file
  5. In Visual Studio, right-click the project -> Add -> Reference -> Browse

  6. Go to the extracted folder -> netstandard2.0 -> and select all .dll files

  7. And it’s done!

HelloScraper - Our First IronWebScraper Sample

As usual, we will start by implementing the Hello Scraper app to take our first step with IronWebScraper.

  • We have created a new Console Application named “IronWebScraperSample”

Steps to Create IronWebScraper Sample

  1. Create a folder and name it “HelloScraperSample”
  2. Then add a new class and name it “HelloScraper”

  3. Add this Code snippet to HelloScraper

    public class HelloScraper : WebScraper
    {
        /// <summary>
        /// Override this method to initialize your web scraper.
        /// Important tasks include requesting at least one start URL and setting allowed/banned domain or URL patterns.
        /// </summary>
        public override void Init()
        {
            License.LicenseKey = "LicenseKey"; // Write License Key
            this.LoggingLevel = WebScraper.LogLevel.All; // All Events Are Logged
            this.Request("https://blog.scrapinghub.com", Parse);
        }
    
        /// <summary>
        /// Override this method to create the default Response handler for your web scraper.
        /// If you have multiple page types, you can add additional similar methods.
        /// </summary>
        /// <param name="response">The http Response object to parse</param>
        public override void Parse(Response response)
        {
            // set working directory for the project
            this.WorkingDirectory = AppSetting.GetAppRoot()+ @"\HelloScraperSample\Output\";
            // Loop over all title links
            foreach (var title_link in response.Css("h2.entry-title a"))
            {
                // Read Link Text
                string strTitle = title_link.TextContentClean;
                // Save Result to File
                Scrape(new ScrapedData() { { "Title", strTitle } }, "HelloScraper.json");
            }
            // Check for a link to the next page
            if (response.CssExists("div.prev-post > a [href]"))
            {
                // Get Link URL
                var next_page = response.Css("div.prev-post > a [href]")[0].Attributes ["href"];
                // Scrape Next URL
                this.Request(next_page, Parse);
            }
        }
    }
  4. Now, to start scraping, add this code snippet to Main

    static void Main(string [] args)
    {
        // Create an object from the HelloScraper class
        HelloScraperSample.HelloScraper scrape = new HelloScraperSample.HelloScraper();
        // Start scraping
        scrape.Start();
    }
  5. The result will be saved in a file following the pattern WebScraper.WorkingDirectory/classname.Json

Code Overview

scrape.Start() fires the scrape logic as follows:

  1. It calls the Init() method first to initialize variables, scrape properties, and behavior attributes.
  2. As we can see, it sets the starting page with Request("https://blog.scrapinghub.com", Parse), and Parse(Response response) is defined as the method used to parse the response.
  3. The WebScraper manages HTTP requests and threads in parallel, while keeping all your code easy to debug and synchronous.
  4. The Parse method starts after Init() and parses the page.
    1. You can find elements using CSS selectors, the JS DOM, or XPath.

    2. Selected elements are cast to the ScrapedData class; you can also cast them to any custom class such as Product, Employee, or News.

    3. The objects are saved to a file in JSON format in the (“bin/Scrape/”) directory, or you can pass the file path as a parameter, as we will see later in other examples and as illustrated below.
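
For illustration only: each record written by Scrape carries the fields you passed in, so a single record from the Hello Scraper sample has roughly the following shape. The title text is made up, and the exact file layout (a JSON array versus one object per line) depends on the library version:

    { "Title": "An example blog post title" }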

IronWebScraper Library Functions and Options

You can find updated documentation inside the zip file downloaded with the manual installation method (the IronWebScraper Documentation.chm file).

Or you can check the online documentation for the library's latest updates at https://ironsoftware.com/csharp/webscraper/object-reference/

To start using IronWebScraper in your project, your class must inherit from the IronWebScraper.WebScraper class, which extends your class library and adds scraping functionality to it.

You must also implement the Init() and Parse(Response response) methods.

namespace IronWebScraperEngine
{
    public class NewsScraper : IronWebScraper.WebScraper
    {
        public override void Init()
        {
            throw new NotImplementedException();
        }

        public override void Parse(Response response)
        {
            throw new NotImplementedException();
        }
    }
}
The following list contains the methods and properties that the IronWebScraper library provides:

  • Init() (Method): Used to set up the scraper.
  • Parse(Response response) (Method): Used to implement the logic that the scraper will use and how it will process each page. Note: you can implement multiple parse methods for pages with different behaviors or structures.
  • BannedUrls, AllowedUrls, BannedDomains, AllowedDomains (Collections): Used to ban or allow URLs and/or domains, e.g. BannedUrls.Add("*.zip", "*.exe", "*.gz", "*.pdf");. You can use * and ? glob-style wildcards, plain strings, or regular expressions, and you can override this behavior entirely by overriding the method public virtual bool AcceptUrl(string url).
  • ObeyRobotsDotTxt (Boolean): Used to enable or disable reading and following robots.txt directives.
  • ObeyRobotsDotTxtForHost(string Host) (Method): Used to enable or disable reading and following robots.txt directives for a specific host.
  • Scrape (Method)
  • ScrapeUnique (Method)
  • ThrottleMode (Enum): Options are ByIpAddress and ByDomainHostName.
  • EnableWebCache() (Method)
  • EnableWebCache(TimeSpan cacheDuration) (Method)
  • MaxHttpConnectionLimit (Int)
  • RateLimitPerHost (TimeSpan)
  • OpenConnectionLimitPerHost (Int)
  • SetSiteSpecificCrawlRateLimit(string hostName, TimeSpan crawlRate) (Method)
  • Identities (Collection): A list of HttpIdentity() objects to be used to fetch web resources. Each identity may have a different proxy IP address, user agent, HTTP headers, persistent cookies, username, and password. Best practice is to create identities in your WebScraper.Init method and add them to the WebScraper.Identities list.
  • WorkingDirectory (string): Sets the working directory that will be used to store all scrape-related data on disk.
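
Most of these options are configured inside Init(). The sketch below combines several of the entries listed above; the property and method names are taken from that list, while the start URL, file patterns, durations, and limits are illustrative assumptions rather than recommendations:

using System;
using IronWebScraper;

public class ConfiguredScraper : WebScraper
{
    public override void Init()
    {
        License.LicenseKey = "LicenseKey";
        this.Request("https://example.com/", Parse); // illustrative start URL

        // Skip binary downloads using glob wildcards (see BannedUrls above)
        this.BannedUrls.Add("*.zip", "*.exe", "*.gz", "*.pdf");

        // Read and follow robots.txt directives
        this.ObeyRobotsDotTxt = true;

        // Cache fetched pages for 12 hours so repeated runs reuse responses
        this.EnableWebCache(TimeSpan.FromHours(12));

        // Connection and rate limits (illustrative values)
        this.MaxHttpConnectionLimit = 50;
        this.RateLimitPerHost = TimeSpan.FromMilliseconds(500);
        this.SetSiteSpecificCrawlRateLimit("example.com", TimeSpan.FromSeconds(2));

        // Identities: the entry above describes HttpIdentity objects with their own
        // proxy, user agent, cookies, and credentials; check the object reference
        // for the exact property names before adding them to this.Identities.
    }

    public override void Parse(Response response)
    {
        // Page-parsing logic goes here
    }
}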


Real World Samples and Practice

Scraping an Online Movie Website

Let’s start with another example from a real-world website. We'll choose to scrape a movie website.

Let’s add a new class and name it “MovieScraper”:

Now let’s take a look at the site that we will scrape:

(Screenshot: the movie website's homepage)

This is part of the homepage HTML we see on the website:

<div id="movie-featured" class="movies-list movies-list-full tab-pane in fade active">
    <div data-movie-id="20746" class="ml-item">
        <a href="https://website.com/film/king-arthur-legend-of-the-sword-20746/">
            <span class="mli-quality">CAM</span>
            <img data-original="https://img.gocdn.online/2017/05/16/poster/2116d6719c710eabe83b377463230fbe-king-arthur-legend-of-the-sword.jpg" 
                 class="lazy thumb mli-thumb" alt="King Arthur: Legend of the Sword"
                  src="https://img.gocdn.online/2017/05/16/poster/2116d6719c710eabe83b377463230fbe-king-arthur-legend-of-the-sword.jpg" 
                 style="display: inline-block;">
            <span class="mli-info"><h2>King Arthur: Legend of the Sword</h2></span>
        </a>
    </div>
    <div data-movie-id="20724" class="ml-item">
        <a href="https://website.com/film/snatched-20724/" >
            <span class="mli-quality">CAM</span>
            <img data-original="https://img.gocdn.online/2017/05/16/poster/5ef66403dc331009bdb5aa37cfe819ba-snatched.jpg" 
                 class="lazy thumb mli-thumb" alt="Snatched" 
                 src="https://img.gocdn.online/2017/05/16/poster/5ef66403dc331009bdb5aa37cfe819ba-snatched.jpg" 
                 style="display: inline-block;">
            <span class="mli-info"><h2>Snatched</h2></span>
        </a>
    </div>
</div>

As we can see, each movie has an ID, a title, and a link to its detailed page.

Let’s start to scrape this set of data:

public class MovieScraper : WebScraper
{
    public override void Init()
    {
        License.LicenseKey = "LicenseKey";
        this.LoggingLevel = WebScraper.LogLevel.All;
        this.WorkingDirectory = AppSetting.GetAppRoot() + @"\MovieSample\Output\";
        this.Request("www.website.com", Parse);
    }
    public override void Parse(Response response)
    {
        foreach (var Divs in response.Css("#movie-featured > div"))
        {
            if (Divs.Attributes ["class"] != "clearfix")
            {
                var MovieId = Divs.GetAttribute("data-movie-id");
                var link = Divs.Css("a")[0];
                var MovieTitle = link.TextContentClean;
                Scrape(new ScrapedData() { { "MovieId", MovieId }, { "MovieTitle", MovieTitle } }, "Movie.Jsonl");
            }
        }           
    }
}

What’s new in this code?

The WorkingDirectory property sets the main working directory for all scraped data and its related files.

Let's do more.

What if we need typed objects to hold the scraped data in a structured form?

Let’s implement a Movie class that will hold our structured data:

public class Movie
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string URL { get; set; }

}

Now we will update our code:

public class MovieScraper : WebScraper
{
    public override void Init()
    {
        License.LicenseKey = "LicenseKey";
        this.LoggingLevel = WebScraper.LogLevel.All;
        this.WorkingDirectory = AppSetting.GetAppRoot() + @"\MovieSample\Output\";
        this.Request("https://website.com/", Parse);
    }
    public override void Parse(Response response)
    {
        foreach (var Divs in response.Css("#movie-featured > div"))
        {
            if (Divs.Attributes ["class"] != "clearfix")
            {
                var movie = new Movie();
                movie.Id = Convert.ToInt32( Divs.GetAttribute("data-movie-id"));
                var link = Divs.Css("a")[0];
                movie.Title = link.TextContentClean;
                movie.URL = link.Attributes ["href"];
                Scrape(movie, "Movie.Jsonl");
            }
        }
    }
}

What’s new?

  1. We implemented a Movie class to hold our scraped data
  2. We pass Movie objects to the Scrape method, which understands our format and saves them in a defined format, as we can see here:

    (Screenshot: the generated Movie.Jsonl output file)

Let’s start scraping a more detailed page.

The Movie Page looks like this:

(Screenshot: the movie details page)

<div class="mvi-content">
    <div class="thumb mvic-thumb"
         style="background-image: url(https://img.gocdn.online/2017/04/28/poster/5a08e94ba02118f22dc30f298c603210-guardians-of-the-galaxy-vol-2.jpg);"></div>
    <div class="mvic-desc">
        <h3>Guardians of the Galaxy Vol. 2</h3>        
        <div class="desc">
            Set to the backdrop of Awesome Mixtape #2, Marvel's Guardians of the Galaxy Vol. 2 continues the team's adventures as they travel throughout the cosmos to help Peter Quill learn more about his true parentage.
        </div>
        <div class="mvic-info">
            <div class="mvici-left">
                <p>
                    <strong>Genre: </strong>
                    <a href="https://Domain/genre/action/" title="Action">Action</a>,
                    <a href="https://Domain/genre/adventure/" title="Adventure">Adventure</a>,
                    <a href="https://Domain/genre/sci-fi/" title="Sci-Fi">Sci-Fi</a>
                </p>
                <p>
                    <strong>Actor: </strong>
                    <a target="_blank" href="https://Domain/actor/chris-pratt" title="Chris Pratt">Chris Pratt</a>,
                    <a target="_blank" href="https://Domain/actor/-zoe-saldana" title="Zoe Saldana">Zoe Saldana</a>,
                    <a target="_blank" href="https://Domain/actor/-dave-bautista-" title="Dave Bautista">Dave Bautista</a>
                </p>
                <p>
                    <strong>Director: </strong>
                    <a href="#" title="James Gunn">James Gunn</a>
                </p>
                <p>
                    <strong>Country: </strong>
                    <a href="https://Domain/country/us" title="United States">United States</a>
                </p>
            </div>
            <div class="mvici-right">
                <p><strong>Duration:</strong> 136 min</p>
                <p><strong>Quality:</strong> <span class="quality">CAM</span></p>
                <p><strong>Release:</strong> 2017</p>
                <p><strong>IMDb:</strong> 8.3</p>
            </div>
            <div class="clearfix"></div>
        </div>
        <div class="clearfix"></div>
    </div>
    <div class="clearfix"></div>
</div>

We can extend our Movie class with new properties (Description, Genre, Actor, Director, Country, Duration, IMDB Score), but we’ll use only Description, Genre, and Actor for our sample.

public class Movie
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string URL { get; set; }
    public string Description { get; set; }
    // Initialize the lists so ParseDetails can Add() to them without a null check
    public List<string> Genre { get; set; } = new List<string>();
    public List<string> Actor { get; set; } = new List<string>();
}

Now we’ll navigate to the detailed page to scrape it.

IronWebScraper lets you add more scrape functions to handle different types of page formats.

As we can see here:

public class MovieScraper : WebScraper
{
    public override void Init()
    {
        License.LicenseKey = "LicenseKey";
        this.LoggingLevel = WebScraper.LogLevel.All;
        this.WorkingDirectory = AppSetting.GetAppRoot() + @"\MovieSample\Output\";
        this.Request("https://domain/", Parse);
    }
    public override void Parse(Response response)
    {
        foreach (var Divs in response.Css("#movie-featured > div"))
        {
            if (Divs.Attributes ["class"] != "clearfix")
            {
                var movie = new Movie();
                movie.Id = Convert.ToInt32( Divs.GetAttribute("data-movie-id"));
                var link = Divs.Css("a")[0];
                movie.Title = link.TextContentClean;
                movie.URL = link.Attributes ["href"];
                this.Request(movie.URL, ParseDetails, new MetaData() { { "movie", movie } }); // to scrape the detailed page
            }
        }           
    }
    public void ParseDetails(Response response)
    {
        var movie = response.MetaData.Get<Movie>("movie");
        var Div = response.Css("div.mvic-desc")[0];
        movie.Description = Div.Css("div.desc")[0].TextContentClean;
        foreach(var Genre in Div.Css("div > p > a"))
        {
            movie.Genre.Add(Genre.TextContentClean);
        }
        foreach (var Actor in Div.Css("div > p:nth-child(2) > a"))
        {
            movie.Actor.Add(Actor.TextContentClean);
        }
        Scrape(movie, "Movie.Jsonl");
    }
}

What’s new?

  1. We can add scrape functions (such as ParseDetails) to scrape detailed pages
  2. We moved the Scrape call that generates our file into the new function
  3. We used the IronWebScraper MetaData feature to pass our movie object to the new scrape function
  4. We scraped the page and saved our movie object's data to a file

Scrape Content from a Shopping Website

We have selected a shopping site to scrape content from.

(Screenshot: the shopping site's homepage)

As you can see from the image, we have a left bar that contains links to the site's product categories.

So our first step is to investigate the site's HTML and plan how we wish to scrape it.

(Screenshot: the category menu in the left bar)

The site's fashion category has sub-categories (Men, Women, Kids):

<li class="menu-item" data-id="">
    <a href="https://domain.com/fashion-by-/" class="main-category">
        <i class="cat-icon osh-font-fashion"></i> <span class="nav-subTxt">FASHION </span> <i class="osh-font-light-arrow-left"></i><i class="osh-font-light-arrow-right"></i>
    </a> <div class="navLayerWrapper" style="width: 633px; display: none;"><div class="submenu"><div class="column"><div class="categories"><a class="category" href="https://domain.com/fashion-by-/?sort=newest&amp;dir=desc&amp;viewType=gridView3">New Arrivals !</a>  </div><div class="categories"><a class="category" href="https://domain.com/men-fashion/">Men</a>   <a class="subcategory" href="https://domain.com/mens-shoes/">Shoes</a>   <a class="subcategory" href="https://domain.com/mens-clothing/">Clothing</a>   <a class="subcategory" href="https://domain.com/mens-accessories/">Accessories</a>  </div><div class="categories"><a class="category" href="https://domain.com/women-fashion/">Women</a>   <a class="subcategory" href="https://domain.com/womens-shoes/">Shoes</a>   <a class="subcategory" href="https://domain.com/womens-clothing/">Clothing</a>   <a class="subcategory" href="https://domain.com/womens-accessories/">Accessories</a>  </div><div class="categories"><a class="category" href="https://domain.com/girls-boys-fashion/">Kids</a>   <a class="subcategory" href="https://domain.com/boys-fashion/">Boys</a>   <a class="subcategory" href="https://domain.com/girls/">Girls</a>  </div><div class="categories"><a class="category" href="https://domain.com/maternity-clothes/">Maternity Clothes</a>  </div></div><div class="column"><div class="categories"> <span class="category defaultCursor">Men Best Sellers</span>  <a class="subcategory" href="https://domain.com/mens-casual-shoes/">Casual Shoes</a>   <a class="subcategory" href="https://domain.com/mens-sneakers/">Sneakers</a>   <a class="subcategory" href="https://domain.com/mens-t-shirts/">T-shirts</a>   <a class="subcategory" href="https://domain.com/mens-polos/">Polos</a>  </div><div class="categories"> <span class="category defaultCursor">Women Best Sellers</span>  <a class="subcategory" href="https://domain.com/womens-sandals/">Sandals</a>   <a class="subcategory" href="https://domain.com/womens-sneakers/">Sneakers</a>   <a class="subcategory" href="https://domain.com/women-dresses/">Dresses</a>   <a class="subcategory" href="https://domain.com/women-tops/">Tops</a>  </div><div class="categories"><a class="category" href="https://domain.com/womens-curvy-clothing/">Women's Curvy Clothing</a>  </div><div class="categories"><a class="category" href="https://domain.com/fashion-bundles/v/">Fashion Bundles</a>  </div><div class="categories"><a class="category" href="https://domain.com/hijab-fashion/">Hijab Fashion</a>  </div></div><div class="column"><div class="categories"><a class="category" href="https://domain.com/brands/fashion-by-/">SEE ALL BRANDS</a>   <a class="subcategory" href="https://domain.com/adidas/">Adidas</a>   <a class="subcategory" href="https://domain.com/converse/">Converse</a>   <a class="subcategory" href="https://domain.com/ravin/">Ravin</a>   <a class="subcategory" href="https://domain.com/dejavu/">Dejavu</a>   <a class="subcategory" href="https://domain.com/agu/">Agu</a>   <a class="subcategory" href="https://domain.com/activ/">Activ</a>   <a class="subcategory" href="https://domain.com/oxford--bellini--tie-house--milano/">Tie House</a>   <a class="subcategory" href="https://domain.com/shoe-room/">Shoe Room</a>   <a class="subcategory" href="https://domain.com/town-team/">Town Team</a>  </div></div></div></div>
</li>

Let's set up the project:

  1. Create a new Console App, or add a new folder for our new sample named “ShoppingSiteSample”
  2. Add a new class named “ShoppingScraper”
  3. The first step will be to scrape the site's categories and their sub-categories

Let’s create a Category model:

public class Category
{
    /// <summary>
    /// Gets or sets the name.
    /// </summary>
    /// <value>
    /// The name.
    /// </value>
    public string Name { get; set; }
    /// <summary>
    /// Gets or sets the URL.
    /// </summary>
    /// <value>
    /// The URL.
    /// </value>
    public string URL { get; set; }
    /// <summary>
    /// Gets or sets the sub categories.
    /// </summary>
    /// <value>
    /// The sub categories.
    /// </value>
    public List<Category> SubCategories { get; set; }
}
  1. Now let’s build our scrape logic

    public class ShoppingScraper : WebScraper
    {
    /// <summary>
    /// Override this method to initialize your web scraper.
    /// Important tasks include requesting at least one start URL and setting allowed/banned domain or URL patterns.
    /// </summary>
    public override void Init()
    {
        License.LicenseKey = "LicenseKey";
        this.LoggingLevel = WebScraper.LogLevel.All;
        this.WorkingDirectory = AppSetting.GetAppRoot() + @"\ShoppingSiteSample\Output\";
        this.Request("www.webSite.com", Parse);
    }
    
    /// <summary>
    /// Override this method to create the default Response handler for your web scraper.
    /// If you have multiple page types, you can add additional similar methods.
    /// </summary>
    /// <param name="response">The http Response object to parse</param>
    public override void Parse(Response response)
    {
        var categoryList = new List<Category>();
    
        foreach (var Links in response.Css("#menuFixed > ul > li > a "))
        {
            var cat = new Category();
            cat.URL = Links.Attributes ["href"];
            cat.Name = Links.InnerText;
            categoryList.Add(cat);
        }
        Scrape(categoryList, "Shopping.Jsonl");
    }
    }

Scraping links from the menu

Let’s update our code to scrape the main categories and all of their sub-links:

public override void Parse(Response response)
{
    // List of Categories Links (Root)
    var categoryList = new List<Category>();

    foreach (var li in response.Css("#menuFixed > ul > li"))
    {
        // List Of Main Links
        foreach (var Links in li.Css("a"))
        {
            var cat = new Category();
            cat.URL = Links.Attributes ["href"];
            cat.Name = Links.InnerText;
            cat.SubCategories = new List<Category>();
            // List of Sub Categories Links
            foreach (var subCategory in li.Css("a [class=subcategory]"))
            {
                var subcat = new Category();
                subcat.URL = subCategory.Attributes ["href"];
                subcat.Name = subCategory.InnerText;
                // Check If Link Exist Before 
                if (cat.SubCategories.Find(c=>c.Name== subcat.Name && c.URL == subcat.URL) == null)
                {
                    // Add Sublinks
                    cat.SubCategories.Add(subcat);
                }
            }
            // Add Categories
            categoryList.Add(cat);
        }
    }
    Scrape(categoryList, "Shopping.Jsonl");
}

Now that we have links to all the site's categories, let's start scraping the products inside each category.

Let’s navigate to any category and check its content.

(Screenshot: the product listing within a category)

Let’s see its code:

<section class="products">
    <div class="sku -gallery -validate-size " data-sku="AG249FA0T2PSGNAFAMZ" ft-product-sizes="41,42,43,44,45" ft-product-color="Multicolour">
        <a class="link" href="http://www.WebSite.com/agu-bundle-of-2-sneakers-black-navy-blue-653884.html">
            <div class="image-wrapper default-state">
                <img class="lazy image -loaded" alt="Bundle Of 2 Sneakers - Black &amp;amp; Navy Blue" data-image-vertical="1" width="210" height="262" src="https://static.WebSite.com/p/agu-6208-488356-1-catalog_grid_3.jpg" data-sku="AG249FA0T2PSGNAFAMZ" data-src="https://static.WebSite.com/p/agu-6208-488356-1-catalog_grid_3.jpg" data-placeholder="placeholder_m_1.jpg"><noscript>&lt;img src="https://static.WebSite.com/p/agu-6208-488356-1-catalog_grid_3.jpg" width="210" height="262" class="image" /&gt;</noscript>
            </div> <h2 class="title">
                <span class="brand ">Agu&nbsp;</span>
                <span class="name" dir="ltr">Bundle Of 2 Sneakers - Black &amp; Navy Blue</span>
            </h2><div class="price-container clearfix">
                <span class="price-box">
                    <span class="price">
                        <span data-currency-iso="EGP">EGP</span>
                        <span dir="ltr" data-price="299">299</span>
                    </span>   <span class="price -old  -no-special"></span>
                </span>
            </div><div class="rating-stars"><div class="stars-container"><div class="stars" style="width: 62%"></div></div> <div class="total-ratings">(30)</div> </div>    <span class="shop-first-logo-container"><img src="http://www.WebSite.com/images/local/logos/shop_first/ShoppingSite/logo_normal.png" data-src="http://www.WebSite.com/images/local/logos/shop_first/ShoppingSite/logo_normal.png" class="lazy shop-first-logo-img -mbxs -loaded"> </span>
            <span class="osh-icon -ShoppingSite-local shop_local--logo -block -mbs -mts"></span>
            <div class="list -sizes" data-selected-sku="">
                <span class="js-link sku-size" data-href="http://www.WebSite.com/agu-bundle-of-2-sneakers-black-navy-blue-653884.html?size=41">41</span>     <span class="js-link sku-size" data-href="http://www.WebSite.com/agu-bundle-of-2-sneakers-black-navy-blue-653884.html?size=42">42</span>
                <span class="js-link sku-size" data-href="http://www.WebSite.com/agu-bundle-of-2-sneakers-black-navy-blue-653884.html?size=43">43</span>     <span class="js-link sku-size" data-href="http://www.WebSite.com/agu-bundle-of-2-sneakers-black-navy-blue-653884.html?size=44">44</span>
                <span class="js-link sku-size" data-href="http://www.WebSite.com/agu-bundle-of-2-sneakers-black-navy-blue-653884.html?size=45">45</span>
            </div>
        </a>
    </div>
    <div class="sku -gallery -validate-size " data-sku="LE047FA01SRK4NAFAMZ" ft-product-sizes="110,115,120,125,130,135" ft-product-color="Black">
        <a class="link" href="http://www.WebSite.com/leather-shop-genuine-leather-belt-black-712030.html">
            <div class="image-wrapper default-state"><img class="lazy image -loaded" alt="Genuine Leather Belt - Black" data-image-vertical="1" width="210" height="262" src="https://static.WebSite.com/p/leather-shop-1831-030217-1-catalog_grid_3.jpg" data-sku="LE047FA01SRK4NAFAMZ" data-src="https://static.WebSite.com/p/leather-shop-1831-030217-1-catalog_grid_3.jpg" data-placeholder="placeholder_m_1.jpg"><noscript>&lt;img src="https://static.WebSite.com/p/leather-shop-1831-030217-1-catalog_grid_3.jpg" width="210" height="262" class="image" /&gt;</noscript></div>
            <h2 class="title"><span class="brand ">Leather Shop&nbsp;</span> <span class="name" dir="ltr">Genuine Leather Belt - Black</span></h2><div class="price-container clearfix">
                <span class="sale-flag-percent">-29%</span>  <span class="price-box"> <span class="price"><span data-currency-iso="EGP">EGP</span> <span dir="ltr" data-price="96">96</span> </span>   <span class="price -old "><span data-currency-iso="EGP">EGP</span> <span dir="ltr" data-price="135">135</span> </span> </span>
            </div><div class="rating-stars"><div class="stars-container"><div class="stars" style="width: 100%"></div></div> <div class="total-ratings">(1)</div> </div>
            <span class="osh-icon -ShoppingSite-local shop_local--logo -block -mbs -mts"></span>    <div class="list -sizes" data-selected-sku="">
                <span class="js-link sku-size" data-href="http://www.WebSite.com/leather-shop-genuine-leather-belt-black-712030.html?size=110">110</span>     <span class="js-link sku-size" data-href="http://www.WebSite.com/leather-shop-genuine-leather-belt-black-712030.html?size=115">115</span>
                <span class="js-link sku-size" data-href="http://www.WebSite.com/leather-shop-genuine-leather-belt-black-712030.html?size=120">120</span>     <span class="js-link sku-size" data-href="http://www.WebSite.com/leather-shop-genuine-leather-belt-black-712030.html?size=125">125</span>     <span class="js-link sku-size" data-href="http://www.WebSite.com/leather-shop-genuine-leather-belt-black-712030.html?size=130">130</span>
                <span class="js-link sku-size" data-href="http://www.WebSite.com/leather-shop-genuine-leather-belt-black-712030.html?size=135">135</span>
            </div>
        </a>
    </div>
</section>
<section class="products">
    <div class="sku -gallery -validate-size " data-sku="AG249FA0T2PSGNAFAMZ" ft-product-sizes="41,42,43,44,45" ft-product-color="Multicolour">
        <a class="link" href="http://www.WebSite.com/agu-bundle-of-2-sneakers-black-navy-blue-653884.html">
            <div class="image-wrapper default-state">
                <img class="lazy image -loaded" alt="Bundle Of 2 Sneakers - Black &amp;amp; Navy Blue" data-image-vertical="1" width="210" height="262" src="https://static.WebSite.com/p/agu-6208-488356-1-catalog_grid_3.jpg" data-sku="AG249FA0T2PSGNAFAMZ" data-src="https://static.WebSite.com/p/agu-6208-488356-1-catalog_grid_3.jpg" data-placeholder="placeholder_m_1.jpg"><noscript>&lt;img src="https://static.WebSite.com/p/agu-6208-488356-1-catalog_grid_3.jpg" width="210" height="262" class="image" /&gt;</noscript>
            </div> <h2 class="title">
                <span class="brand ">Agu&nbsp;</span>
                <span class="name" dir="ltr">Bundle Of 2 Sneakers - Black &amp; Navy Blue</span>
            </h2><div class="price-container clearfix">
                <span class="price-box">
                    <span class="price">
                        <span data-currency-iso="EGP">EGP</span>
                        <span dir="ltr" data-price="299">299</span>
                    </span>   <span class="price -old  -no-special"></span>
                </span>
            </div><div class="rating-stars"><div class="stars-container"><div class="stars" style="width: 62%"></div></div> <div class="total-ratings">(30)</div> </div>    <span class="shop-first-logo-container"><img src="http://www.WebSite.com/images/local/logos/shop_first/ShoppingSite/logo_normal.png" data-src="http://www.WebSite.com/images/local/logos/shop_first/ShoppingSite/logo_normal.png" class="lazy shop-first-logo-img -mbxs -loaded"> </span>
            <span class="osh-icon -ShoppingSite-local shop_local--logo -block -mbs -mts"></span>
            <div class="list -sizes" data-selected-sku="">
                <span class="js-link sku-size" data-href="http://www.WebSite.com/agu-bundle-of-2-sneakers-black-navy-blue-653884.html?size=41">41</span>     <span class="js-link sku-size" data-href="http://www.WebSite.com/agu-bundle-of-2-sneakers-black-navy-blue-653884.html?size=42">42</span>
                <span class="js-link sku-size" data-href="http://www.WebSite.com/agu-bundle-of-2-sneakers-black-navy-blue-653884.html?size=43">43</span>     <span class="js-link sku-size" data-href="http://www.WebSite.com/agu-bundle-of-2-sneakers-black-navy-blue-653884.html?size=44">44</span>
                <span class="js-link sku-size" data-href="http://www.WebSite.com/agu-bundle-of-2-sneakers-black-navy-blue-653884.html?size=45">45</span>
            </div>
        </a>
    </div>
    <div class="sku -gallery -validate-size " data-sku="LE047FA01SRK4NAFAMZ" ft-product-sizes="110,115,120,125,130,135" ft-product-color="Black">
        <a class="link" href="http://www.WebSite.com/leather-shop-genuine-leather-belt-black-712030.html">
            <div class="image-wrapper default-state"><img class="lazy image -loaded" alt="Genuine Leather Belt - Black" data-image-vertical="1" width="210" height="262" src="https://static.WebSite.com/p/leather-shop-1831-030217-1-catalog_grid_3.jpg" data-sku="LE047FA01SRK4NAFAMZ" data-src="https://static.WebSite.com/p/leather-shop-1831-030217-1-catalog_grid_3.jpg" data-placeholder="placeholder_m_1.jpg"><noscript>&lt;img src="https://static.WebSite.com/p/leather-shop-1831-030217-1-catalog_grid_3.jpg" width="210" height="262" class="image" /&gt;</noscript></div>
            <h2 class="title"><span class="brand ">Leather Shop&nbsp;</span> <span class="name" dir="ltr">Genuine Leather Belt - Black</span></h2><div class="price-container clearfix">
                <span class="sale-flag-percent">-29%</span>  <span class="price-box"> <span class="price"><span data-currency-iso="EGP">EGP</span> <span dir="ltr" data-price="96">96</span> </span>   <span class="price -old "><span data-currency-iso="EGP">EGP</span> <span dir="ltr" data-price="135">135</span> </span> </span>
            </div><div class="rating-stars"><div class="stars-container"><div class="stars" style="width: 100%"></div></div> <div class="total-ratings">(1)</div> </div>
            <span class="osh-icon -ShoppingSite-local shop_local--logo -block -mbs -mts"></span>    <div class="list -sizes" data-selected-sku="">
                <span class="js-link sku-size" data-href="http://www.WebSite.com/leather-shop-genuine-leather-belt-black-712030.html?size=110">110</span>     <span class="js-link sku-size" data-href="http://www.WebSite.com/leather-shop-genuine-leather-belt-black-712030.html?size=115">115</span>
                <span class="js-link sku-size" data-href="http://www.WebSite.com/leather-shop-genuine-leather-belt-black-712030.html?size=120">120</span>     <span class="js-link sku-size" data-href="http://www.WebSite.com/leather-shop-genuine-leather-belt-black-712030.html?size=125">125</span>     <span class="js-link sku-size" data-href="http://www.WebSite.com/leather-shop-genuine-leather-belt-black-712030.html?size=130">130</span>
                <span class="js-link sku-size" data-href="http://www.WebSite.com/leather-shop-genuine-leather-belt-black-712030.html?size=135">135</span>
            </div>
        </a>
    </div>
</section>

Let’s build our product model for this content.

public class Product
{
    /// <summary>
    /// Gets or sets the name.
    /// </summary>
    /// <value>
    /// The name.
    /// </value>
    public string Name { get; set; }
    /// <summary>
    /// Gets or sets the price.
    /// </summary>
    /// <value>
    /// The price.
    /// </value>
    public string Price { get; set; }
    /// <summary>
    /// Gets or sets the image.
    /// </summary>
    /// <value>
    /// The image.
    /// </value>
    public string Image { get; set; }
}
Public Class Product
	''' <summary>
	''' Gets or sets the name.
	''' </summary>
	''' <value>
	''' The name.
	''' </value>
	Public Property Name() As String
	''' <summary>
	''' Gets or sets the price.
	''' </summary>
	''' <value>
	''' The price.
	''' </value>
	Public Property Price() As String
	''' <summary>
	''' Gets or sets the image.
	''' </summary>
	''' <value>
	''' The image.
	''' </value>
	Public Property Image() As String
End Class

To scrape category pages, we add a new scrape method:

public void ParseCatgory(Response response)
{          
    // List of Products Links (Root)
    var productList = new List<Product>();

    foreach (var Links in response.Css("body > main > section.osh-content > section.products > div > a"))
    {
        var product = new Product();
        product.Name = Links.InnerText;
        product.Image = Links.Css("div.image-wrapper.default-state > img")[0].Attributes ["src"];                
        productList.Add(product);
    }

    Scrape(productList, "Products.Jsonl");
}
Public Sub ParseCatgory(ByVal response As Response)
	' List of Products Links (Root)
	Dim productList = New List(Of Product)()

	For Each Links In response.Css("body > main > section.osh-content > section.products > div > a")
		Dim product As New Product()
		product.Name = Links.InnerText
		product.Image = Links.Css("div.image-wrapper.default-state > img")(0).Attributes ("src")
		productList.Add(product)
	Next Links

	Scrape(productList, "Products.Jsonl")
End Sub
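Note that the Product model above also has a Price property which ParseCatgory does not populate. Based on the sample markup shown earlier (the current price sits inside span.price within the price container), here is a sketch of one way to fill it in; the exact selector depends on the live page's markup:

public void ParseCatgory(Response response)
{
    // List of products found on this category page
    var productList = new List<Product>();

    foreach (var Links in response.Css("body > main > section.osh-content > section.products > div > a"))
    {
        var product = new Product();
        product.Name = Links.InnerText;
        product.Image = Links.Css("div.image-wrapper.default-state > img")[0].Attributes["src"];
        // Assumed selector: the first span.price inside the price box is the current price;
        // the old price, when present, is span.price.-old. InnerText includes the currency
        // prefix, e.g. "EGP 299".
        product.Price = Links.Css("div.price-container > span.price-box > span.price")[0].InnerText;
        productList.Add(product);
    }

    Scrape(productList, "Products.Jsonl");
}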

Advanced Webscraping Features

HttpIdentity Feature:

Some websites require the user to be logged in to view the content; in this case we can use an HttpIdentity:

HttpIdentity id = new HttpIdentity();
id.NetworkUsername = "username";
id.NetworkPassword = "pwd";
Identities.Add(id); 
Dim id As New HttpIdentity()
id.NetworkUsername = "username"
id.NetworkPassword = "pwd"
Identities.Add(id)

One of the most impressive and powerful features in IronWebScraper is the ability to use thousands of unique identities (user credentials and/or browser engines) to scrape websites using multiple login sessions.

public override void Init()
{
    License.LicenseKey = " LicenseKey ";
    this.LoggingLevel = WebScraper.LogLevel.All;
    this.WorkingDirectory = AppSetting.GetAppRoot() + @"\ShoppingSiteSample\Output\";
    var proxies = "IP-Proxy1: 8080,IP-Proxy2: 8081".Split(',');
    foreach (var UA in IronWebScraper.CommonUserAgents.ChromeDesktopUserAgents)
    {
        foreach (var proxy in proxies)
        {
            Identities.Add(new HttpIdentity()
            {
                UserAgent = UA,
                UseCookies = true,
                Proxy = proxy
            });
        }
    }
    this.Request("http://www.Website.com", Parse);
}
Public Overrides Sub Init()
	License.LicenseKey = " LicenseKey "
	Me.LoggingLevel = WebScraper.LogLevel.All
	Me.WorkingDirectory = AppSetting.GetAppRoot() & "\ShoppingSiteSample\Output\"
	Dim proxies = "IP-Proxy1: 8080,IP-Proxy2: 8081".Split(","c)
	For Each UA In IronWebScraper.CommonUserAgents.ChromeDesktopUserAgents
		For Each proxy In proxies
			Identities.Add(New HttpIdentity() With {
				.UserAgent = UA,
				.UseCookies = True,
				.Proxy = proxy
			})
		Next proxy
	Next UA
	Me.Request("http://www.Website.com", Parse)
End Sub

Multiple properties are available to give each identity a different behavior, helping to prevent websites from blocking you.

Some of these properties:

  • NetworkDomain : the network domain to be used for user authentication. Supports Windows, NTLM, Kerberos, Linux, BSD, and Mac OS X networks. Must be used with NetworkUsername and NetworkPassword.
  • NetworkUsername : the network/HTTP username to be used for user authentication. Supports HTTP, Windows networks, NTLM, Kerberos, Linux networks, BSD networks, and Mac OS.
  • NetworkPassword : the network/HTTP password to be used for user authentication. Supports HTTP, Windows networks, NTLM, Kerberos, Linux networks, BSD networks, and Mac OS.
  • Proxy : sets proxy settings
  • UserAgent : sets the browser engine (Chrome desktop, Chrome mobile, Chrome tablet, IE, Firefox, etc.)
  • HttpRequestHeaders : custom header values to be used with this identity; accepts a dictionary object (Dictionary<string, string>) (see the sketch after this list)
  • UseCookies : enables/disables the use of cookies
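As an illustration of how these properties combine, here is a minimal sketch of an identity that sets custom request headers, written to sit inside Init() like the earlier snippets; the user agent, proxy address, and header values are placeholders, and HttpRequestHeaders is assumed to take a Dictionary<string, string> as described above:

HttpIdentity id = new HttpIdentity();
id.UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"; // placeholder user-agent string
id.Proxy = "IP-Proxy1:8080";                                // placeholder proxy address
id.UseCookies = true;
id.HttpRequestHeaders = new Dictionary<string, string>      // assumed Dictionary<string, string>, as noted above
{
    { "Accept-Language", "en-US" },              // placeholder header values
    { "X-Requested-With", "XMLHttpRequest" }
};
Identities.Add(id);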

IronWebScraper runs the scraper using random identities. If we need to use a specific identity to parse a page, we can do so:

public override void Init()
{
    License.LicenseKey = " LicenseKey ";
    this.LoggingLevel = WebScraper.LogLevel.All;
    this.WorkingDirectory = AppSetting.GetAppRoot() + @"\ShoppingSiteSample\Output\";
    HttpIdentity identity = new HttpIdentity();
    identity.NetworkUsername = "username";
    identity.NetworkPassword = "pwd";
    Identities.Add(identity);
    this.Request("http://www.Website.com", Parse, identity);
}
Public Overrides Sub Init()
	License.LicenseKey = " LicenseKey "
	Me.LoggingLevel = WebScraper.LogLevel.All
	Me.WorkingDirectory = AppSetting.GetAppRoot() & "\ShoppingSiteSample\Output\"
	Dim identity As New HttpIdentity()
	identity.NetworkUsername = "username"
	identity.NetworkPassword = "pwd"
	Identities.Add(identity)
	Me.Request("http://www.Website.com", Parse, identity)
End Sub

Enable the Web Cache Feature:

This feature is used to cache requested pages. It is typically used during the development and testing phases, enabling developers to cache required pages for reuse after updating code. This lets you run your code against cached pages after restarting your web scraper, without connecting to the live website every time (action replay).

You can enable it in the Init() method:

EnableWebCache();

OR

EnableWebCache(TimeSpan expiry);

Cached data is saved to the WebCache folder under the working directory.

public override void Init()
{
    License.LicenseKey = " LicenseKey ";
    this.LoggingLevel = WebScraper.LogLevel.All;
    this.WorkingDirectory = AppSetting.GetAppRoot() + @"\ShoppingSiteSample\Output\";
    EnableWebCache(new TimeSpan(1,30,30));
    this.Request("http://www.WebSite.com", Parse);
}
Public Overrides Sub Init()
	License.LicenseKey = " LicenseKey "
	Me.LoggingLevel = WebScraper.LogLevel.All
	Me.WorkingDirectory = AppSetting.GetAppRoot() & "\ShoppingSiteSample\Output\"
	EnableWebCache(New TimeSpan(1,30,30))
	Me.Request("http://www.WebSite.com", Parse)
End Sub

IronWebScraper also has a feature that lets your engine continue scraping after a code restart, by setting the engine start process name using Start(CrawlID).

static void Main(string [] args)
{
    // Create Object From Scraper class
    EngineScraper scrape = new EngineScraper();
    // Start Scraping
    scrape.Start("enginestate");
}
Shared Sub Main(ByVal args() As String)
	' Create Object From Scraper class
	Dim scrape As New EngineScraper()
	' Start Scraping
	scrape.Start("enginestate")
End Sub
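The Main method above refers to an EngineScraper class that is not defined in this section; it would simply be a WebScraper subclass like the ones used throughout this tutorial. A minimal sketch, with a placeholder start URL and link selector:

public class EngineScraper : WebScraper
{
    public override void Init()
    {
        License.LicenseKey = " LicenseKey ";
        this.WorkingDirectory = AppSetting.GetAppRoot() + @"\ShoppingSiteSample\Output\";
        this.Request("http://www.WebSite.com", Parse);
    }

    public override void Parse(Response response)
    {
        // Placeholder selector: follow category links and hand each category page
        // to the ParseCatgory method shown earlier in this tutorial.
        foreach (var link in response.Css("a.category-link"))
        {
            this.Request(link.Attributes["href"], ParseCatgory);
        }
    }
}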

The execution request and response will be saved in the SavedState folder inside the working directory.

Throttling

We can control the number of open connections (in total and per host) and the request rate per domain.

public override void Init()
{
    License.LicenseKey = "LicenseKey";
    this.LoggingLevel = WebScraper.LogLevel.All;
    this.WorkingDirectory = AppSetting.GetAppRoot() + @"\ShoppingSiteSample\Output\";
    // Gets or sets the total number of allowed open HTTP requests (threads)
    this.MaxHttpConnectionLimit = 80;
    // Gets or sets the minimum polite delay (pause) between requests to a given domain or IP address.
    this.RateLimitPerHost = TimeSpan.FromMilliseconds(50);            
    //     Gets or sets the allowed number of concurrent HTTP requests (threads) per hostname
    //     or IP address. This helps protect hosts against too many requests.
    this.OpenConnectionLimitPerHost = 25;
    this.ObeyRobotsDotTxt = false;
    //     Makes the WebScraper intelligently throttle requests not only by hostname, but
    //     also by host servers' IP addresses. This is polite in case multiple scraped domains
    //     are hosted on the same machine.
    this.ThrottleMode = Throttle.ByDomainHostName;

    this.Request("https://www.Website.com", Parse);
}
Public Overrides Sub Init()
	License.LicenseKey = "LicenseKey"
	Me.LoggingLevel = WebScraper.LogLevel.All
	Me.WorkingDirectory = AppSetting.GetAppRoot() & "\ShoppingSiteSample\Output\"
	' Gets or sets the total number of allowed open HTTP requests (threads)
	Me.MaxHttpConnectionLimit = 80
	' Gets or sets the minimum polite delay (pause) between requests to a given domain or IP address.
	Me.RateLimitPerHost = TimeSpan.FromMilliseconds(50)
	'     Gets or sets the allowed number of concurrent HTTP requests (threads) per hostname
	'     or IP address. This helps protect hosts against too many requests.
	Me.OpenConnectionLimitPerHost = 25
	Me.ObeyRobotsDotTxt = False
	'     Makes the WebScraper intelligently throttle requests not only by hostname, but
	'     also by host servers' IP addresses. This is polite in case multiple scraped domains
	'     are hosted on the same machine.
	Me.ThrottleMode = Throttle.ByDomainHostName

	Me.Request("https://www.Website.com", Parse)
End Sub

Throttling properties

  • MaxHttpConnectionLimit
    total number of allowed open HTTP requests (threads)
  • RateLimitPerHost
    minimum polite delay (pause) between requests to a given domain or IP address
  • OpenConnectionLimitPerHost
    allowed number of concurrent HTTP requests (threads) per hostname or IP address
  • ThrottleMode
    makes the WebScraper intelligently throttle requests not only by hostname, but also by host servers' IP addresses. This is polite in case multiple scraped domains are hosted on the same machine.

Appendix

How to Create a Windows Form Application?

We should use Visual Studio 2013 or higher for this.

Follow these steps to create a new Windows Forms project:

  1. Open Visual Studio

    Enterprise2015 related to How to Create a Windows Form Application?

  2. File -> New -> Project

    FileNewProject related to How to Create a Windows Form Application?

  3. From Template, Choose the programming language (Visual C# or VB) -> Windows -> Windows Forms Application

    CreateWindowsApp related to How to Create a Windows Form Application?

Name of project: IronScraperSample
Location: Choose a location on your hard disk

WindowsAppMainScreen related to How to Create a Windows Form Application?

How to Create a Web Form Application?

You should use Visual Studio 2013 or higher for this.

Follow these steps to create a new ASP.NET Web Forms project:

  1. Open Visual Studio

    Enterprise2015 related to How to Create a Web Form Application?

  2. File -> New -> Project

    FileNewProject related to How to Create a Web Form Application?

  3. From Template, Choose the programming language (Visual C# or VB) -> Web -> ASP.NET Web Application (.NET Framework)

    ASPNETWebApplication related to How to Create a Web Form Application?

Name of project: IronScraperSample
Location: Choose a location on your hard disk

  4. From your ASP.NET Templates

    1. Select Empty Template
    2. Check Web Forms

    ASPNETTemplates related to How to Create a Web Form Application?

  5. Now your basic ASP.NET Web Forms project is created

    ASPNETWebFormProject related to How to Create a Web Form Application?

Click here to download the full tutorial sample project code.


Ahmed Aboelmagd

.NET Software Solution Architect at a Multinational IT Company

Ahmed is an experienced and certified Microsoft Technology Specialist with more than 10 years of experience in IT and software development. He has worked at a range of companies, and is now a country manager for a multinational IT company.

Ahmed has been working with IronPDF and IronWebScraper for over a year now, using them in multiple projects with his company.