
C# Read CSV File into DataTable: Complete Developer Guide

Converting CSV files to DataTable in C# is simple with IronXL: use WorkBook.LoadCSV() to read any CSV file, then call ToDataTable(true) to create a structured DataTable with proper column headers, ready for database import or data manipulation.

Working with CSV files is a common task for .NET developers, whether you're importing sales reports, processing inventory, or syncing customer records. Converting a CSV file into a DataTable makes it easy to manipulate, analyze, or insert into a database table. The IronXL library provides a robust solution for handling Excel and CSV operations in C# applications.

Reading CSV files in C# can be tricky; large files, different delimiters, or embedded commas often require complex parsing code or Excel automation. IronXL simplifies this: with just a few lines of code, you can read CSV files from any file path, convert them into a DataTable with proper column headers, and prepare them for bulk database operations, all without Excel installed. This approach is particularly useful when working with Excel in C# without Interop.

In this guide, you'll learn how to:

  • Load a CSV file into a DataTable in C# using IronXL's CSV reading capabilities
  • Handle different delimiters like commas, tabs, or semicolons
  • Import your DataTable directly into SQL Server efficiently using export functionality
  • Manage large datasets reliably without running into memory issues

By the end, you'll have a complete, practical workflow for turning CSV data into actionable information using IronXL in your .NET applications, whether you're deploying to Azure or running on Linux environments.

Why Is CSV to DataTable Conversion Essential?

CSV to DataTable conversion allows developers to transform comma-separated values (CSV data) into structured objects like a DataTable for further processing. Whether you're handling inventory data, customer records, or transaction logs, being able to convert CSV to a new DataTable efficiently is crucial. Using the first row as column headers, you can ensure that datatable columns align with your database table schema, making it perfect for Excel to SQL operations.

Traditional approaches often struggle with large files, different delimiters, or memory management. IronXL eliminates these pain points: it handles different delimiters, quoted fields, and embedded commas seamlessly, while providing additional features like data validation and file size optimization.

When Should I Use DataTable Over Other Data Structures?

DataTables excel when you need database-like operations in memory. They're ideal for scenarios involving SQL Server imports, data binding to UI controls, or when you need to perform LINQ queries on structured data. Unlike simple arrays or lists, DataTables provide schema validation, relationships between tables, and seamless integration with ADO.NET. For more complex scenarios, you can also convert between DataSet and DataTable as needed.
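
For example, once your CSV data is in a DataTable, you can query it with LINQ through AsEnumerable() and the typed Field<T> accessor. The following minimal sketch uses a hand-built DataTable with hypothetical CustomerName and Amount columns to stand in for loaded CSV data:

using System;
using System.Data;
using System.Linq;

class LinqOverDataTable
{
    static void Main()
    {
        // Hand-built DataTable standing in for CSV data loaded via IronXL;
        // the column names here are purely illustrative.
        var orders = new DataTable();
        orders.Columns.Add("CustomerName", typeof(string));
        orders.Columns.Add("Amount", typeof(decimal));
        orders.Rows.Add("Acme", 120.50m);
        orders.Rows.Add("Globex", 89.99m);
        orders.Rows.Add("Acme", 45.00m);

        // Group and sum with LINQ using the typed Field<T> accessor
        var totals = orders.AsEnumerable()
            .GroupBy(r => r.Field<string>("CustomerName"))
            .Select(g => new { Customer = g.Key, Total = g.Sum(r => r.Field<decimal>("Amount")) });

        foreach (var t in totals)
        {
            Console.WriteLine($"{t.Customer}: {t.Total}");
        }
    }
}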

What Are Common Issues With Traditional CSV Parsing?

Manual CSV parsing often fails with edge cases: embedded commas within quoted fields, line breaks in cell values, or inconsistent delimiter usage. Memory consumption becomes problematic with large files when loading everything into memory at once. Character encoding issues can corrupt international data, while type inference often misinterprets numeric strings as numbers. These challenges are why IronXL's robust parsing engine has become essential for production applications.

How Does IronXL Solve Memory Management Problems?

IronXL implements streaming and buffered reading techniques to handle large CSV files efficiently. Instead of loading entire files into memory, it processes data in chunks, maintaining a small memory footprint even with gigabyte-sized files. The library also provides performance optimizations that reduce memory usage by up to 95% compared to traditional approaches. This makes it suitable for server environments with limited resources.

How to Install IronXL?

Getting started with IronXL requires just a simple NuGet package installation. Open the NuGet Package Manager Console in Visual Studio and execute:

Install-Package IronXL.Excel

Once installed, add the IronXL namespace to your project:

using IronXL;
using System.Data;

This provides access to all the powerful CSV processing capabilities without any Excel dependencies. For more details on IronXL's extensive API, refer to the API Reference Documentation. If you're using Docker containers, additional configuration steps may be required.

What Are the System Requirements for IronXL?

IronXL supports .NET Framework 4.6.2+ and .NET Core/5/6/7/8+, making it compatible with modern and legacy applications. It runs on Windows, Linux, and macOS platforms. The library requires approximately 50MB of disk space and minimal RAM for basic operations. For Azure Functions or containerized environments, ensure your deployment includes the necessary runtime dependencies.

How Do I Verify the Installation Was Successful?

Create a simple test program that loads a CSV file to verify installation. If you encounter licensing messages, you'll need to apply a license key. Check the package references in your project file to ensure IronXL appears with the correct version. Run a basic CSV read operation—if it completes without errors, your installation is successful. For web applications, you may need to configure the license in web.config.
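
A minimal verification sketch, assuming a small test.csv sits next to your executable (the file name is just a placeholder):

using System;
using System.Data;
using IronXL;

class InstallCheck
{
    static void Main()
    {
        // "test.csv" is a placeholder; point this at any small CSV file on disk
        WorkBook workBook = WorkBook.LoadCSV("test.csv");
        DataTable table = workBook.DefaultWorkSheet.ToDataTable(true);

        Console.WriteLine($"IronXL loaded {table.Rows.Count} rows and {table.Columns.Count} columns.");
    }
}

If this prints a row and column count without throwing an exception or a licensing error, the package is installed correctly.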

Why Choose NuGet Over Manual Installation?

NuGet automatically manages dependencies, ensuring all required libraries are installed correctly. It simplifies version updates—one command updates IronXL and all dependencies. NuGet integration with Visual Studio provides IntelliSense support immediately after installation. Package restore functionality ensures team members get the same version automatically. For enterprise environments requiring specific versions, NuGet also supports private package feeds.

Check out IronXL on NuGet for quick installation, or download the DLL directly if you prefer manual installation.

How to Convert CSV to DataTable?

The beauty of IronXL lies in its simplicity. Here's how to read a CSV file and convert it to a DataTable:

// Load CSV file into a WorkBook object
WorkBook workbook = WorkBook.LoadCSV("sales_data.csv", 
    fileFormat: ExcelFileFormat.XLSX);

// Access the default worksheet
WorkSheet worksheet = workbook.DefaultWorkSheet;

// Convert to DataTable with headers
DataTable dataTable = worksheet.ToDataTable(true);

// Display the data
foreach (DataRow row in dataTable.Rows)
{
    foreach (var item in row.ItemArray)
    {
        Console.Write($"{item}\t");
    }
    Console.WriteLine();
}

The LoadCSV method intelligently parses your CSV file, automatically handling common cases such as quoted fields and embedded commas. The fileFormat parameter tells IronXL how to interpret the CSV data internally, ensuring it's treated as an Excel-compatible structure. Once loaded, the ToDataTable(true) method converts the worksheet into a DataTable, with the parameter true indicating that the first row should be used as column headers. This results in a clean, structured DataTable with named columns that match your CSV headers. For more advanced manipulation options, explore the complete Excel to DataTable tutorial.

The conversion process also preserves data types intelligently—numbers remain numeric, dates are parsed correctly, and text fields maintain their formatting. This automatic type inference saves considerable development time compared to manual parsing approaches. For scenarios requiring custom data formatting, you can apply cell data formats before conversion.

What Does the ToDataTable Parameter Control?

The boolean parameter in ToDataTable() determines header row handling. When set to true, the first row becomes column names in your DataTable, creating meaningful field references like "CustomerName" instead of "Column1". When false, generic column names are assigned, useful for headerless CSV files. This flexibility supports various CSV formats encountered in real-world applications. Advanced scenarios might require managing worksheets to handle multiple data ranges.

How Can I Handle CSV Files Without Headers?

For headerless CSV files, use ToDataTable(false) and manually assign column names afterward. You can iterate through the Columns collection and set meaningful names based on your data schema. Alternatively, prepend a header row to your CSV programmatically before loading. IronXL also supports creating spreadsheets from scratch if you need to add headers to existing data.
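
Here is a small sketch of that approach, assuming a headerless file with three columns (the file and column names are illustrative):

using System;
using System.Data;
using IronXL;

class HeaderlessCsv
{
    static void Main()
    {
        // Load a CSV that has no header row; the file name is illustrative
        WorkBook workBook = WorkBook.LoadCSV("readings_no_header.csv");

        // false = do not treat the first row as headers; generic column names are assigned
        DataTable table = workBook.DefaultWorkSheet.ToDataTable(false);

        // Rename the generic columns to match your schema
        string[] names = { "SensorId", "Timestamp", "Value" };
        for (int i = 0; i < table.Columns.Count && i < names.Length; i++)
        {
            table.Columns[i].ColumnName = names[i];
        }

        foreach (DataColumn column in table.Columns)
        {
            Console.Write($"{column.ColumnName}\t");
        }
        Console.WriteLine();
    }
}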

What Are the Performance Implications of Large CSV Files?

IronXL processes large CSV files efficiently using streaming techniques. Files under 100MB typically load in seconds, while gigabyte-sized files benefit from the library's performance enhancements. Memory usage remains constant regardless of file size due to intelligent buffering. For optimal performance with massive datasets, consider processing in batches using range selection techniques.

How to Import DataTable into SQL Server?

Once you have your DataTable, importing it into SQL Server becomes remarkably efficient using SqlBulkCopy:

using System.Data;
using Microsoft.Data.SqlClient;
using IronXL;

class Program
{
    static void Main(string[] args)
    {
        // Create connection string
        string connectionString = @"Data Source=localhost;Initial Catalog=SalesDB;
                           Integrated Security=True;TrustServerCertificate=True;";

        // Read CSV into DataTable
        WorkBook workbook = WorkBook.LoadCSV("inventory_report.csv");
        DataTable dataTable = workbook.DefaultWorkSheet.ToDataTable(true);

        // Bulk insert into SQL Server
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
            {
                // Set destination table name
                bulkCopy.DestinationTableName = "dbo.Inventory";

                // Map DataTable columns to SQL table columns
                bulkCopy.ColumnMappings.Add("ProductCode", "product_code");
                bulkCopy.ColumnMappings.Add("Quantity", "quantity");
                bulkCopy.ColumnMappings.Add("LastUpdated", "last_updated");

                // Set batch size for better performance
                bulkCopy.BatchSize = 1000;

                // Write data to SQL Server
                bulkCopy.WriteToServer(dataTable);
            }
        }

        Console.WriteLine($"Successfully imported {dataTable.Rows.Count} records");
    }
}

The SqlBulkCopy class provides exceptional performance for large-scale data imports. The ColumnMappings collection allows you to map DataTable columns to database columns with different names, ensuring flexibility in your data schema. The BatchSize property optimizes memory usage by processing records in chunks rather than loading everything at once. This approach can handle millions of records efficiently, making it ideal for enterprise-scale data operations. Learn more about IronXL's DataSet and DataTable capabilities.

For additional data integrity, consider implementing transaction support around your bulk copy operations. This ensures that partial imports can be rolled back if errors occur. You can also leverage IronXL's data validation features to pre-validate data before import.
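
As a sketch of that pattern, the bulk copy can be wrapped in a SqlTransaction so that a failure rolls back the entire import (the connection string and table name follow the example above):

using System;
using System.Data;
using Microsoft.Data.SqlClient;
using IronXL;

class TransactionalImport
{
    static void Main()
    {
        string connectionString = @"Data Source=localhost;Initial Catalog=SalesDB;
                           Integrated Security=True;TrustServerCertificate=True;";

        DataTable dataTable = WorkBook.LoadCSV("inventory_report.csv")
                                      .DefaultWorkSheet.ToDataTable(true);

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (SqlTransaction transaction = connection.BeginTransaction())
            {
                try
                {
                    using (var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, transaction))
                    {
                        bulkCopy.DestinationTableName = "dbo.Inventory";
                        bulkCopy.BatchSize = 1000;
                        bulkCopy.WriteToServer(dataTable);
                    }
                    transaction.Commit();   // all batches succeeded
                }
                catch
                {
                    transaction.Rollback(); // undo any batches already written
                    throw;
                }
            }
        }
    }
}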

Why Is BatchSize Important for Performance?

BatchSize controls how many rows are sent to SQL Server in each network round trip. Smaller batches (100–1000 rows) reduce memory usage and allow for progress monitoring but increase network overhead. Larger batches (5000–10000 rows) maximize throughput for fast networks but consume more memory. The optimal size depends on your row width, network latency, and available memory. For complex scenarios involving multiple Excel ranges, batch processing becomes even more critical.

How Do I Handle Column Mapping Mismatches?

Column mapping mismatches cause runtime exceptions during bulk copy operations. Always verify that source DataTable column names match your mapping definitions exactly, including case sensitivity. Use the GetOrdinal method to validate column existence before mapping. For dynamic schemas, query the destination table's structure and build mappings programmatically. IronXL's worksheet management features can help standardize column names before import.
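
The following sketch builds mappings programmatically by querying the destination table's columns and mapping only names that exist on both sides (it assumes source and destination columns share the same names, compared case-insensitively):

using System;
using System.Collections.Generic;
using System.Data;
using Microsoft.Data.SqlClient;

static class BulkCopyMapping
{
    // Adds a mapping for every DataTable column that also exists in the destination table.
    // Assumes a name-for-name match (case-insensitive); adapt for different naming conventions.
    public static void MapMatchingColumns(SqlBulkCopy bulkCopy, DataTable source,
                                          SqlConnection connection, string tableName)
    {
        var destinationColumns = new HashSet<string>(StringComparer.OrdinalIgnoreCase);

        using (var command = new SqlCommand(
            "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @table", connection))
        {
            command.Parameters.AddWithValue("@table", tableName);
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    destinationColumns.Add(reader.GetString(0));
                }
            }
        }

        foreach (DataColumn column in source.Columns)
        {
            if (destinationColumns.Contains(column.ColumnName))
            {
                bulkCopy.ColumnMappings.Add(column.ColumnName, column.ColumnName);
            }
        }
    }
}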

What Security Considerations Apply to Bulk Imports?

Bulk imports require elevated database permissions, typically db_datawriter or bulkadmin roles. Use integrated authentication when possible to avoid embedding credentials in connection strings. Implement row-level security by adding audit columns during import. Consider encrypting sensitive workbooks before processing. Always validate data types and ranges to prevent SQL injection through malformed CSV data.
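
For instance, the audit-column idea above can be as simple as stamping each row with the import time before calling WriteToServer. A minimal sketch, where "ImportedAtUtc" is a hypothetical column that must also exist in the destination table:

using System;
using System.Data;

static class AuditColumns
{
    // Sketch: append an import timestamp to every row before bulk copy.
    public static void AddImportTimestamp(DataTable dataTable)
    {
        dataTable.Columns.Add("ImportedAtUtc", typeof(DateTime));

        DateTime now = DateTime.UtcNow;
        foreach (DataRow row in dataTable.Rows)
        {
            row["ImportedAtUtc"] = now;
        }
    }
}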

How Can I Monitor Import Progress?

For long-running imports, implement the SqlRowsCopied event to track progress. This event fires after each batch, allowing you to update progress bars or log completion percentages. Calculate estimated time remaining based on rows processed versus total rows. For critical imports, log batch completions to a separate tracking table. Consider using IronXL's performance monitoring features to identify bottlenecks.
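
A minimal sketch of that progress reporting, attached to the SqlBulkCopy instance from the import example (the 1,000-row notification interval is arbitrary):

using System;
using Microsoft.Data.SqlClient;

static class BulkCopyProgress
{
    // Sketch: log progress as rows are copied to the server.
    public static void EnableProgressLogging(SqlBulkCopy bulkCopy, long totalRows)
    {
        bulkCopy.NotifyAfter = 1000;  // raise SqlRowsCopied every 1,000 rows
        bulkCopy.SqlRowsCopied += (sender, e) =>
        {
            double percent = totalRows > 0 ? 100.0 * e.RowsCopied / totalRows : 0;
            Console.WriteLine($"Copied {e.RowsCopied:N0} of {totalRows:N0} rows ({percent:F1}%)");
        };
    }
}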

Here, we can use the query SELECT * FROM dbo.Inventory; in SSMS to check that our CSV data actually got imported into the SQL Server. First, let's take a look at the data we used in our example CSV file:

Notepad window showing a sample CSV inventory report with product codes, quantities, and dates in comma-separated format

Now, if you look at the output for our query, we can see it has successfully copied every column, header, and row into the database table.

SQL query results showing 3 inventory records imported from CSV with product codes A100, B200, and C300, their quantities, and timestamps in a database management tool

How to Handle Different Delimiters?

Not all CSV files use commas. IronXL seamlessly handles various delimiters, making it perfect for international data formats or legacy systems:

using IronXL;
using System;
using System.Data;
using System.IO;

class Program
{
    static void Main()
    {
        // --- Tab-delimited file ---
        string tsvPath = @"export_data.tsv";
        WorkBook tsvWorkbook = WorkBook.LoadCSV(tsvPath, ExcelFileFormat.XLSX, "\t");

        // --- Semicolon-delimited file ---
        string semiPath = @"european_data.csv";
        string tempCsv = Path.Combine(Path.GetTempPath(), "european_data_comma.csv");

        // Replace semicolons with commas for proper parsing
        var lines = File.ReadAllLines(semiPath);
        for (int i = 0; i < lines.Length; i++)
        {
            lines[i] = lines[i].Replace(';', ',');
        }
        File.WriteAllLines(tempCsv, lines);

        WorkBook semiWorkbook = WorkBook.LoadCSV(tempCsv, ExcelFileFormat.XLSX);

        // Print both workbooks
        PrintWorkbook(tsvWorkbook, "Tab-delimited File");
        PrintWorkbook(semiWorkbook, "Semicolon-delimited File");
    }

    static void PrintWorkbook(WorkBook workbook, string name)
    {
        Console.WriteLine($"\n--- {name} ---\n");
        DataTable dt = workbook.DefaultWorkSheet.ToDataTable(true);

        // Print column headers
        foreach (DataColumn col in dt.Columns)
        {
            Console.Write($"{col.ColumnName}\t");
        }
        Console.WriteLine();

        // Print rows
        foreach (DataRow row in dt.Rows)
        {
            foreach (var item in row.ItemArray)
            {
                Console.Write($"{item}\t");
            }
            Console.WriteLine();
        }
    }
}

This code demonstrates loading CSV files with different delimiters into IronXL WorkBook objects. Tab-delimited files are read using "\t" as the separator, while semicolon-delimited files are converted to standard CSV format before loading. Each workbook's default worksheet is converted into a DataTable with ToDataTable(true), which uses the first row as column headers. For more complex delimiter scenarios, you can also explore converting between spreadsheet file types.

The PrintWorkbook method then iterates through the DataTable and displays the data in the console, including the headers and all rows. This flexibility addresses many of the common CSV parsing challenges discussed by developers. When working with European data formats, IronXL's delimiter handling proves particularly valuable, as it accommodates regional differences in data formatting standards.

Which Delimiters Does IronXL Support Natively?

IronXL natively supports common delimiters including commas, tabs, pipes (|), and custom single-character delimiters through the LoadCSV overload. For optimal performance, use the built-in delimiter parameter rather than pre-processing files. Multi-character delimiters require preprocessing as shown in the semicolon example above. The library also handles Excel's native formats if you need to convert between CSV and XLSX.

How Can I Detect the Delimiter Automatically?

Automatic delimiter detection involves analyzing the first few lines of your CSV file. Count occurrences of common delimiters (comma, tab, semicolon, pipe) in each line. The character appearing consistently with the highest frequency is likely your delimiter. IronXL doesn't provide automatic detection, but you can implement this logic before calling LoadCSV. For files with inconsistent delimiters, consider validating data quality first.
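
A simple sketch of that heuristic is shown below; it samples the first few lines, counts candidate delimiters, and returns the most frequent one. Quoted fields are not taken into account, so treat the result as a best guess. The file name in the usage comment is illustrative.

using System;
using System.IO;
using System.Linq;

static class DelimiterSniffer
{
    // Heuristic only: returns the candidate delimiter that occurs most often
    // across the first few lines of the file.
    public static string DetectDelimiter(string path, int sampleLines = 5)
    {
        char[] candidates = { ',', '\t', ';', '|' };
        string[] lines = File.ReadLines(path).Take(sampleLines).ToArray();

        char best = candidates
            .OrderByDescending(c => lines.Sum(line => line.Count(ch => ch == c)))
            .First();

        return best.ToString();
    }
}

// Usage (sketch): pass the detected delimiter straight into LoadCSV
// string delimiter = DelimiterSniffer.DetectDelimiter("unknown_format.csv");
// WorkBook workbook = WorkBook.LoadCSV("unknown_format.csv", ExcelFileFormat.XLSX, delimiter);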

What About Multi-Character Delimiters?

Multi-character delimiters like "||" or "::" require preprocessing since LoadCSV accepts single-character delimiters only. Replace multi-character sequences with a unique single character not present in your data (like \x01). After replacement, load the file with your chosen delimiter. This approach maintains data integrity while working within IronXL's delimiter constraints. For complex transformations, consider using IronXL's cell manipulation features post-import.
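
Here is a brief sketch of that preprocessing step for a hypothetical "||"-delimited file (the file names are illustrative):

using System.Data;
using System.IO;
using IronXL;

class MultiCharDelimiter
{
    static void Main()
    {
        string sourcePath = "pipes_export.txt";   // hypothetical "||"-delimited file
        string tempPath = Path.Combine(Path.GetTempPath(), "pipes_single_delim.csv");

        // Replace the two-character delimiter with \x01, a control character
        // unlikely to appear in real data
        string text = File.ReadAllText(sourcePath);
        File.WriteAllText(tempPath, text.Replace("||", "\x01"));

        // Load using the single-character stand-in as the delimiter
        WorkBook workbook = WorkBook.LoadCSV(tempPath, ExcelFileFormat.XLSX, "\x01");
        DataTable table = workbook.DefaultWorkSheet.ToDataTable(true);

        System.Console.WriteLine($"Parsed {table.Rows.Count} rows.");
    }
}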

As you can see in our console output, both files are correctly formatted into data tables, despite starting out with different delimiters.

Microsoft Visual Studio Debug Console showing output of two different file formats—a tab-delimited file with fruit data (ID, Name, Quantity) and a semicolon-delimited file with city population data

What Are the Next Steps?

IronXL transforms the complex task of CSV database imports into a straightforward, reliable process. With just a few lines of code, you can:

  • Read CSV files from any file path using LoadCSV functionality
  • Convert CSV data into a new DataTable with proper data formatting
  • Preserve column headers and datatable columns through intelligent parsing
  • Import millions of rows into a database table efficiently using bulk operations

The library handles the intricacies of CSV parsing, memory management, and data type conversions automatically, letting you focus on your business logic rather than file processing details. Whether you're building ASP.NET applications, working with Blazor, or developing .NET MAUI apps, IronXL provides consistent, reliable CSV processing across all platforms.

Ready to streamline your data import workflow? Start your free trial to find the perfect fit for your team's needs. With comprehensive documentation, extensive code examples, and enterprise support, you'll be processing CSV files like a pro in no time.

Frequently Asked Questions

What is the best way to read CSV files into a DataTable in C#?

Using IronXL, you can efficiently read CSV files into a DataTable in C# with simple code examples provided in our developer guide.

Why should I use IronXL for CSV to DataTable conversion?

IronXL offers a straightforward API for parsing CSV files and converting them into DataTables, allowing you to easily manipulate and analyze your data in C#.

Can IronXL handle large CSV files when converting to DataTables?

Yes, IronXL is designed to efficiently process large CSV files and convert them into DataTables without performance issues.

Does IronXL support data manipulation after converting CSV to DataTable?

Absolutely, once you convert a CSV file into a DataTable using IronXL, you can easily manipulate and analyze the data as needed.

How can I import CSV data into a database using IronXL?

After converting your CSV file into a DataTable with IronXL, you can insert the data into a database using standard database connectivity in C#.

Is IronXL suitable for processing CSV files in enterprise applications?

Yes, IronXL is designed to handle CSV processing tasks in enterprise applications, offering robust performance and reliability.

What are the advantages of converting CSV files to DataTables in C#?

Converting CSV files to DataTables allows for easier data manipulation, analysis, and integration with databases, enhancing your application's data handling capabilities.

Can IronXL be used for other spreadsheet file types besides CSV?

Yes, IronXL supports various spreadsheet file types including Excel formats, enabling versatile data processing capabilities in C#.

Jordi Bardia
Software Engineer