
How OCR with Computer Vision Enhances Text Recognition Accuracy

Extracting text from images sounds straightforward until the document arrives crooked, faded, or captured under poor lighting. This is where computer vision transforms optical character recognition from a fragile process into a reliable one. By applying intelligent image analysis before data extraction, OCR systems can achieve recognition accuracy that approaches human-level performance across scanned documents that would otherwise produce garbled results.

OCR with computer vision has become a foundational technology for digital transformation initiatives, eliminating manual data entry across diverse document types. This guide explores how these techniques integrate to dramatically improve text recognition in .NET applications. From preprocessing filters that correct poor-quality scans to the neural network architectures powering modern OCR engines, understanding these concepts enables you to build document processing systems that handle real-world input images gracefully.

To follow along with the code examples below, install IronOCR via NuGet:

dotnet add package IronOcr

Or use the NuGet Package Manager Console:

Install-Package IronOcr

Visit the IronOCR NuGet package page to confirm the latest version before installing.

What is the Relationship Between Computer Vision and OCR?

Computer vision encompasses the broader field of teaching machines to interpret visual information, while OCR specifically focuses on converting printed or handwritten text within an image file into machine-encoded text. Optical character recognition operates as a specialized application within computer vision, drawing on many of the same underlying techniques for image analysis and pattern recognition.

The modern OCR pipeline consists of three interconnected stages. Text detection identifies the regions of a scanned image that contain text, isolating them from backgrounds, graphics, and other visual elements. Image preprocessing then enhances these detected regions, correcting distortions and improving contrast so that individual characters become more distinguishable. Finally, character recognition applies pattern matching and neural network inference to convert the visual representation of each glyph into its corresponding digital text.

Traditional OCR technology struggled when any of these stages encountered imperfect input. A slightly rotated scan might produce complete nonsense, while low-resolution input images or printed documents with background patterns often failed entirely. Computer vision techniques address these limitations by making each pipeline stage more adaptive and resilient, enabling successful recognition across business documents, bank statements, and even handwritten notes.

The fastest way to see OCR working in your .NET project is to run a basic recognition pass:

using IronOcr;

// Initialize the optical character reader
var ocr = new IronTesseract();

// Load scanned document or image file
using var input = new OcrInput();
input.LoadImage("document.png");

// Perform text recognition and data extraction
OcrResult result = ocr.Read(input);
Console.WriteLine(result.Text);

The code above demonstrates the simplest OCR workflow using IronOCR. The IronTesseract class provides a managed wrapper around the Tesseract 5 engine, while OcrInput handles image file loading and format conversion. For clean, well-formatted text documents, this basic optical character recognition approach often suffices. However, real-world scanned documents rarely arrive in pristine condition, which is where preprocessing becomes essential for extracting text accurately.

Input

How OCR with Computer Vision Enhances Accuracy in Text Recognition Using IronOCR: Image 1 - Sample Input Image

Output

How OCR with Computer Vision Enhances Accuracy in Text Recognition Using IronOCR: Image 2 - Console Output

How Does Image Preprocessing Improve Text Recognition?

Image preprocessing applies computer vision operations to enhance input quality before the OCR engine analyzes it. These transformations address the most common causes of OCR failures: rotation, noise, low contrast, and insufficient resolution. Each preprocessing technique targets a specific image defect, and combining them strategically can rescue printed documents and scanned images that would otherwise be unreadable.

Deskewing corrects rotational misalignment that occurs when documents are scanned at an angle. Even a slight rotation significantly impacts OCR accuracy because optical character recognition software expects text lines to run horizontally. The deskew operation analyzes text line angles and applies a corrective rotation to align content.

Noise reduction removes digital artifacts, speckles, and scanner-introduced distortions that can be misinterpreted as individual characters. Background patterns, dust marks, and compression artifacts all create noise that interferes with accurate character segmentation in the original image.

Binarization converts images to pure black and white, eliminating color information and grayscale gradients. This simplification helps the recognition engine distinguish printed text from background more definitively, particularly in documents with colored paper or faded printing, where identifying letters becomes challenging.
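Conceptually, binarization is just thresholding: every pixel is forced to pure black or pure white. This minimal sketch in plain C# (independent of IronOCR, operating on a hypothetical grayscale byte array) illustrates the idea; production engines use adaptive variants of the same principle:

```csharp
using System;
using System.Linq;

class BinarizeSketch
{
    // Global thresholding: pixels darker than the threshold become black (0),
    // the rest become white (255).
    public static byte[] Binarize(byte[] grayscale, byte threshold) =>
        grayscale.Select(p => p < threshold ? (byte)0 : (byte)255).ToArray();

    static void Main()
    {
        // Faded print (dark glyph pixels) on a light, slightly tinted page.
        byte[] pixels = { 40, 60, 200, 220, 90, 240 };
        byte[] bw = Binarize(pixels, threshold: 128);
        Console.WriteLine(string.Join(",", bw)); // 0,0,255,255,0,255
    }
}
```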

Resolution enhancement increases pixel density for poor-quality scans or photographs. Higher resolution provides more detail for the OCR software to analyze, improving its ability to distinguish between similar-looking characters and enabling successful recognition even on degraded input.

using IronOcr;

var ocr = new IronTesseract();

// Load poor quality scan for document processing
using var input = new OcrInput();
input.LoadImage("low-quality-scan.jpg");

// Apply preprocessing filters for improved accuracy
input.Deskew();                   // Correct rotational skew in scanned image
input.DeNoise();                  // Remove digital artifacts from input
input.Binarize();                 // Convert to black and white for text extraction
input.EnhanceResolution(300);     // Boost to 300 DPI for single character clarity

OcrResult result = ocr.Read(input);
Console.WriteLine($"Extracted: {result.Text}");

This example chains multiple preprocessing filters before performing OCR. The Deskew() method analyzes the document and applies rotational correction, while DeNoise() removes speckles and artifacts from the text image. The Binarize() call converts the scanned image to pure black and white for cleaner text extraction, and EnhanceResolution() boosts the image to 300 DPI -- the recommended minimum for accurate character recognition.

The order of filter application matters. Deskewing should typically occur early in the chain since subsequent filters work better on properly aligned images. Noise reduction before binarization helps prevent artifacts from being permanently encoded into the black-and-white conversion. Experimenting with filter combinations for specific document types often reveals the optimal sequence for a given use case, whether the OCR application processes invoices, receipts, patient records, or scanned contracts.

How Do You Choose the Right Preprocessing Filter Combination?

Choosing the right filter combination depends on the nature of the input document. Camera-captured images with perspective distortion benefit from deskewing first, then denoising. Faxed or photocopied documents often require aggressive binarization to cut through gray halos around characters. Low-resolution scans need resolution enhancement before any other filter, because upscaling before denoising avoids amplifying compression artifacts.

A practical approach is to categorize your document sources -- scanner, camera, fax, PDF rasterization -- and apply a tailored filter chain for each. IronOCR supports chaining as many filters as needed in a single OcrInput pass, so you can define per-source profiles in configuration and apply them at runtime without rewriting recognition logic.
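As a sketch of that approach, per-source profiles can be as simple as a dictionary of filter chains. This reuses the IronOCR filter methods shown earlier; the source names and chains here are illustrative, not prescriptive:

```csharp
using System;
using System.Collections.Generic;
using IronOcr;

static class FilterProfiles
{
    // Map each document source to an ordered chain of OcrInput filter calls.
    static readonly Dictionary<string, Action<OcrInput>> Profiles = new()
    {
        ["scanner"] = input => { input.Deskew(); input.Binarize(); },
        ["camera"]  = input => { input.Deskew(); input.DeNoise(); input.EnhanceResolution(300); },
        ["fax"]     = input => { input.Binarize(); input.DeNoise(); },
    };

    public static string ReadFrom(string source, string path)
    {
        var ocr = new IronTesseract();
        using var input = new OcrInput();
        input.LoadImage(path);
        Profiles[source](input);   // apply the per-source preprocessing chain
        return ocr.Read(input).Text;
    }
}
```

Because the profiles are data, new document sources can be added without touching the recognition logic itself.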

Which Deep Learning Models Power Modern OCR?

Contemporary OCR engines rely on deep learning architectures that have revolutionized text recognition accuracy. Unlike traditional approaches that matched characters against predefined templates, neural network-based OCR models learn to recognize text patterns from vast training datasets, enabling them to handle font variations, handwriting styles, and degraded images far more effectively. This machine learning approach powers today's most capable OCR solutions.

The recognition pipeline typically combines two neural network types. Convolutional Neural Networks (CNNs) excel at feature extraction from images. These networks process the input image through multiple layers that progressively identify increasingly complex patterns -- from basic edges and curves to complete character shapes. The CNN produces a feature map that encodes the visual characteristics of the text region, handling both printed text and handwritten text with improved accuracy.

Long Short-Term Memory (LSTM) networks then process these features as a sequence, recognizing that digital text flows in a specific order. LSTMs maintain memory of previous inputs, allowing them to understand context and handle the sequential nature of written language. This combination -- often called CRNN (Convolutional Recurrent Neural Network) -- forms the backbone of modern OCR accuracy and enables intelligent character recognition across multiple languages.

The Tesseract 5 engine that powers IronOCR implements this LSTM-based architecture, representing a significant advancement over earlier versions that relied purely on traditional pattern recognition. The neural network approach handles specific fonts, partial occlusions, and image degradation that would defeat template-based OCR systems.

using IronOcr;

var ocr = new IronTesseract();

// Configure OCR engine for multilingual text recognition
ocr.Language = OcrLanguage.English;  // IronOCR supports 125+ languages

// Process PDF with mixed handwriting styles and printed text
using var input = new OcrInput("web-report.pdf");
input.Deskew();

OcrResult result = ocr.Read(input);

// Access detailed recognition data including text regions
foreach (var page in result.Pages)
{
    Console.WriteLine($"Page {page.PageNumber}: {page.Text}");
}

The IronTesseract class provides access to Tesseract 5's neural network capabilities through a clean .NET interface. The OcrResult object returned contains not just the extracted text but structured data, including pages, paragraphs, lines, and individual words with their confidence scores and bounding coordinates.

Input

How OCR with Computer Vision Enhances Accuracy in Text Recognition Using IronOCR: Image 3 - Sample PDF Input

Output

How OCR with Computer Vision Enhances Accuracy in Text Recognition Using IronOCR: Image 4 - OCR Output

This structured output proves valuable for applications beyond simple text extraction. Document processing systems can use word positions to understand complex layouts, while quality assurance workflows can flag low-confidence regions for human review. The neural network architecture makes all of this possible by providing rich metadata alongside the recognized text, enabling AI-based OCR solutions that process large volumes of unstructured data efficiently.

How Does IronOCR Handle Multilingual Documents?

IronOCR ships with support for over 125 languages, each backed by a dedicated Tesseract LSTM language model. You select a language by setting the Language property on IronTesseract before calling Read. For documents mixing two languages -- a German contract with English footnotes, for example -- you can specify multiple languages simultaneously, and the engine applies the most appropriate model per text region.
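A sketch of that configuration, assuming IronOCR's AddSecondaryLanguage method as described in its documentation (the file name is illustrative):

```csharp
using System;
using IronOcr;

var ocr = new IronTesseract();

// Primary language for the document body, plus a secondary
// language for mixed-language regions such as English footnotes.
ocr.Language = OcrLanguage.German;
ocr.AddSecondaryLanguage(OcrLanguage.English);

using var input = new OcrInput("german-contract.pdf");
OcrResult result = ocr.Read(input);
Console.WriteLine(result.Text);
```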

Language packs are distributed as NuGet packages so you only download the models your application needs. This keeps deployment size manageable for applications targeting a single language while still allowing full multilingual support when required.

How Do You Enable Region-Based OCR for Forms and Tables?

Region-based OCR restricts recognition to defined areas of an image, which is useful when documents contain specific zones of interest such as form fields, invoice line items, or table cells. This targeted approach improves both speed and accuracy by focusing computational resources on relevant content.

using IronOcr;
using IronSoftware.Drawing;

var ocr = new IronTesseract();

using var input = new OcrInput();
input.LoadImage("invoice.jpg");

// Define a crop region for the total amount field (x, y, width, height in pixels)
var totalRegion = new CropRectangle(x: 600, y: 800, width: 300, height: 50);
input.AddRegion(totalRegion);

OcrResult result = ocr.Read(input);
Console.WriteLine($"Invoice total: {result.Text}");

Combining region-based OCR with confidence thresholds gives you fine-grained control over data quality. For financial documents and legal materials, flagging any word below an 85% confidence level for secondary review is a practical baseline. You can tune the threshold per document type based on the quality of scans arriving from each source.
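A minimal way to encode per-type thresholds in plain C# (the document types and floor values here are illustrative and should be tuned against your own scans):

```csharp
using System;
using System.Collections.Generic;

public static class ConfidenceRouting
{
    // Per-document-type confidence floors; anything below is routed to review.
    static readonly Dictionary<string, double> Thresholds = new()
    {
        ["invoice"]  = 0.85,
        ["contract"] = 0.90,
        ["receipt"]  = 0.75,
    };

    // True when a recognized word should be flagged for human review.
    public static bool NeedsReview(string docType, double wordConfidence) =>
        wordConfidence < Thresholds.GetValueOrDefault(docType, 0.85);

    static void Main()
    {
        // 0.87 is below the 0.90 floor for contracts, so it is flagged.
        Console.WriteLine(NeedsReview("contract", 0.87)); // True
    }
}
```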

Learn more about region-based OCR and crop rectangles in the IronOCR documentation.

How Can Developers Optimize OCR Accuracy Programmatically?

Beyond applying standard preprocessing filters, you can fine-tune how OCR performs for specific document types and quality requirements. Confidence scoring, automatic filter optimization, and searchable PDF generation all contribute to maximizing recognition accuracy in production applications that must recognize text reliably across diverse document types.

Confidence scores indicate how certain the engine is about each recognized element. Analyzing these scores helps identify problematic areas that may need manual verification or alternative processing approaches. Applications can set confidence thresholds below which results are flagged for review -- essential for sensitive documents that require high accuracy.

using IronOcr;

var ocr = new IronTesseract();

// Load business document for OCR processing
using var input = new OcrInput("receipt.jpg");

// Let the system determine optimal preprocessing for OCR accuracy
string suggestedCode = OcrInputFilterWizard.Run(
    "receipt.jpg",
    out double confidence,
    ocr);

Console.WriteLine($"Achieved confidence: {confidence:P1}");
Console.WriteLine($"Optimal filter chain: {suggestedCode}");

// Apply recommended filters for successful recognition
input.DeNoise();
input.Deskew();

OcrResult result = ocr.Read(input);

// Analyze word-level confidence for extracted text
foreach (var word in result.Words)
{
    if (word.Confidence < 0.85)
    {
        Console.WriteLine($"Low confidence: '{word.Text}' ({word.Confidence:P0})");
    }
}

The OcrInputFilterWizard analyzes an image and tests various filter combinations to determine which preprocessing chain produces the highest confidence results. This automated approach eliminates guesswork when handling unfamiliar document types. The wizard returns both the achieved confidence level and the code needed to reproduce the optimal configuration, streamlining OCR application development for business processes.

The word-level confidence analysis demonstrated in the loop provides a granular quality assessment. Applications processing financial documents, patient records, or legal materials often require this level of scrutiny to ensure extracted data meets accuracy standards. Words falling below the confidence threshold can trigger secondary verification processes or alternative recognition attempts, supporting data management workflows that demand reliability.

How Do You Generate Searchable PDFs from Scanned Documents?

For documents requiring conversion to searchable archives, IronOCR can generate searchable PDFs that embed the recognized text layer beneath the original image, enabling full-text search while preserving visual fidelity. This capability transforms scanned documents into a digital format suitable for long-term archiving, legal discovery workflows, or enterprise content management systems.

using IronOcr;

var ocr = new IronTesseract();

using var input = new OcrInput("scanned-contract.pdf");
input.Deskew();
input.DeNoise();

OcrResult result = ocr.Read(input);

// Export as searchable PDF with embedded text layer
result.SaveAsSearchablePdf("searchable-contract.pdf");
Console.WriteLine("Searchable PDF saved successfully.");

The resulting file retains the visual appearance of the original scan while adding a hidden text layer that search tools and screen readers can access. This is the standard output format for document digitization projects targeting compliance or accessibility requirements.

How Do You Compare OCR Performance Across Document Types?

Different document categories -- printed forms, handwritten notes, low-quality fax transmissions, and high-resolution camera captures -- respond differently to preprocessing and recognition settings. Benchmarking your pipeline against representative samples from each category reveals where accuracy gaps exist and which filters to tune.

OCR preprocessing recommendations by document type

| Document Type | Recommended Filters | Typical Accuracy Improvement | Primary Challenge |
|---|---|---|---|
| Flatbed-scanned text | Deskew, Binarize | 5-15% | Slight rotation, shadow edges |
| Camera-captured document | Deskew, DeNoise, EnhanceResolution | 20-40% | Perspective distortion, noise |
| Fax / photocopy | Binarize, DeNoise | 15-30% | Halftone patterns, degraded contrast |
| Low-resolution scan (<150 DPI) | EnhanceResolution(300), Deskew | 30-50% | Insufficient pixel density |
| Handwritten notes | Binarize, DeNoise | 10-25% | Variable stroke width, style variation |

These accuracy improvements are directional estimates drawn from published OCR preprocessing benchmarks. Actual results vary with scan equipment, document age, and content complexity. Running the OcrInputFilterWizard against your own sample set gives you empirical data specific to your pipeline.
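Benchmarking needs a metric. A common choice is character accuracy derived from the edit distance between OCR output and a hand-checked ground-truth transcription; a minimal sketch in plain C#:

```csharp
using System;

public static class OcrBenchmark
{
    // Levenshtein edit distance between two strings.
    public static int EditDistance(string a, string b)
    {
        var d = new int[a.Length + 1, b.Length + 1];
        for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
        for (int j = 0; j <= b.Length; j++) d[0, j] = j;
        for (int i = 1; i <= a.Length; i++)
            for (int j = 1; j <= b.Length; j++)
                d[i, j] = Math.Min(
                    Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1),
                    d[i - 1, j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1));
        return d[a.Length, b.Length];
    }

    // Character accuracy: 1 - (errors / ground-truth length).
    public static double CharacterAccuracy(string truth, string ocrText) =>
        truth.Length == 0 ? 1.0 : 1.0 - (double)EditDistance(truth, ocrText) / truth.Length;

    static void Main()
    {
        // One substituted character against a 16-character ground truth.
        double acc = CharacterAccuracy("Total: $1,234.56", "Total: $1.234.56");
        Console.WriteLine($"Character accuracy: {acc:P1}");
    }
}
```

Running this metric per document category, before and after each candidate filter chain, turns the table above into numbers specific to your own corpus.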

Explore the full list of available IronOCR preprocessing filters to understand all options available to you when tuning a pipeline.

What Are the Key IronOCR Features for Production Document Processing?

When deploying OCR in production, several IronOCR capabilities beyond basic recognition become important for reliability and throughput. Understanding these features helps you design a pipeline that scales without sacrificing accuracy.

Multi-format input support -- IronOCR accepts images (PNG, JPEG, TIFF, BMP, GIF, WEBP), PDF files, and multi-page TIFFs through a single unified API. This means you can handle whatever format arrives from scanning stations, email attachments, or document management systems without writing format-specific code paths.

Thread safety -- The IronTesseract class is thread-safe, so sharing a single instance across threads will not corrupt state. For high-throughput applications, however, create one instance per thread or draw instances from a pool to avoid lock contention on the underlying Tesseract engine.

Barcode and QR code co-processing -- IronOCR can read barcodes and QR codes from the same image in a single pass, removing the need for a separate barcode library when processing mixed-content documents such as shipping labels or product inventory sheets.

Output format options -- Beyond plain text, IronOCR can return structured data in HOCR format, export directly to searchable PDFs, and provide word bounding boxes suitable for downstream data extraction workflows.
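The thread-safety guidance above can be sketched with one engine per worker (file names are illustrative):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using IronOcr;

var files = new[] { "doc1.png", "doc2.png", "doc3.png" };
var results = new ConcurrentDictionary<string, string>();

Parallel.ForEach(files, file =>
{
    // One engine per worker avoids contention on the native Tesseract engine.
    var ocr = new IronTesseract();
    using var input = new OcrInput();
    input.LoadImage(file);
    results[file] = ocr.Read(input).Text;
});

Console.WriteLine($"Processed {results.Count} documents.");
```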

Review the complete IronOCR features overview to see all capabilities before finalizing your architecture.

What Are Your Next Steps?

Computer vision techniques fundamentally transform optical character recognition from a technology that only works with perfect input into one capable of handling the messy reality of scanned documents, photographs, and degraded images. The preprocessing stage -- deskewing, denoising, binarization, and resolution enhancement -- addresses physical capture defects, while neural network architectures such as CNN-LSTM provide the recognition intelligence to accurately interpret varied fonts and handwriting styles.

For .NET developers, IronOCR packages these capabilities into a managed library that simplifies native Tesseract integration while adding practical enhancements for production use. The combination of automatic preprocessing optimization, detailed confidence reporting, and structured result data enables document processing systems that perform reliably across diverse real-world inputs -- from printed documents to handwritten notes -- and support multilingual OCR across multiple languages.


Frequently Asked Questions

How does computer vision improve OCR accuracy?

Computer vision improves OCR accuracy by applying image preprocessing before recognition. Techniques such as deskewing, denoising, binarization, and resolution enhancement correct physical capture defects that cause OCR engines to misread or skip characters. Neural network models further improve accuracy by learning to recognize text patterns across fonts, handwriting styles, and degraded images.

What preprocessing filters does IronOCR support?

IronOCR supports deskewing, denoising, binarization, resolution enhancement, and several additional filters through the OcrInput API. You can chain multiple filters in a single pass and use the OcrInputFilterWizard to automatically discover the optimal filter combination for a given document type.

Which deep learning model powers IronOCR?

IronOCR is powered by Tesseract 5, which uses an LSTM (Long Short-Term Memory) neural network architecture. Combined with convolutional feature extraction, this CRNN model handles font variations, partial occlusions, and image degradation more effectively than traditional template-based OCR systems.

How do you perform region-based OCR with IronOCR?

Use the AddRegion method on OcrInput with a CropRectangle defining the x, y, width, and height of the target area in pixels. IronOCR then restricts recognition to that zone, improving both speed and accuracy for structured documents such as forms and invoices.

Can IronOCR generate searchable PDFs from scanned documents?

Yes. After calling Read on an OcrInput, call SaveAsSearchablePdf on the OcrResult object. This produces a PDF that embeds the recognized text as a hidden layer beneath the original scan image, enabling full-text search while preserving the visual appearance of the document.

How many languages does IronOCR support?

IronOCR supports over 125 languages. Each language is backed by a dedicated Tesseract LSTM model distributed as a NuGet package. You can specify multiple languages simultaneously for documents that mix two or more languages.

What order should preprocessing filters be applied in?

As a general rule, apply deskewing first so subsequent filters work on properly aligned images. Follow with denoising before binarization to prevent artifacts from being permanently encoded into the black-and-white conversion. Apply resolution enhancement early if the source is low-resolution, as upscaling before denoising avoids amplifying compression artifacts.

How do confidence scores work in IronOCR?

IronOCR returns a confidence score between 0 and 1 for each recognized word in the OcrResult. A score of 0.85 or higher is generally considered reliable for business documents. Words below your chosen threshold can be flagged for manual review or routed to a secondary recognition pass.

Kannaopat Udonpant
Software Engineer