Quality · January 18, 2026 · 6 min

Computer vision in quality control: What actually works in practice

Computer vision quality control sounds promising but has pitfalls. Four use cases, honest limitations, and what you actually need to run automated quality control in manufacturing.

Computer vision quality control comes up in almost every conversation I have with manufacturing companies. And for good reason: visual inspection tasks are repetitive, tiring, and error-prone when a person has to examine parts for eight hours straight. The error rate climbs with every hour, especially on monotonous tasks.

But between the idea and a working system, there are hurdles that rarely get discussed. In this post, I describe four concrete use cases for AI quality inspection, explain what hardware and setup you actually need, and say honestly where computer vision hits its limits.

What computer vision quality inspection actually means in practice

At its core, you point a camera at a part, capture an image, and evaluate that image automatically. Sounds simple. Sometimes it is - but the details make or break it.

A typical automated quality control manufacturing setup consists of three components: a camera with the right lens, controlled lighting, and a computing unit that evaluates the image. The evaluation can be rule-based (classical image processing with defined thresholds) or AI-based (a trained model that distinguishes good from bad).

In practice, I find that the AI-based approach makes sense where defects are variable - meaning they do not always look the same. A scratch can be thin or wide, straight or curved. A rule-based approach needs a separate rule for each variation. A trained model learns the pattern from examples.
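The difference between the two approaches can be illustrated with a minimal sketch. The function below is a hypothetical rule-based check: it flags a part as defective whenever any pixel deviates too far from the mean surface brightness. The fixed threshold is exactly the weakness described above - it catches one kind of deviation well but needs retuning for every new defect variation.

```python
import numpy as np

def rule_based_check(image: np.ndarray, threshold: float = 40.0) -> bool:
    """Rule-based check: flag the part as defective if any pixel deviates
    from the mean surface brightness by more than a fixed threshold.
    Works only for defects that always look roughly the same."""
    deviation = np.abs(image.astype(float) - image.mean())
    return bool((deviation > threshold).any())

# Synthetic 8-bit grayscale "surface": uniform gray with one dark streak.
good = np.full((64, 64), 128, dtype=np.uint8)
scratched = good.copy()
scratched[32, 10:20] = 40  # dark scratch-like streak

print(rule_based_check(good))       # False
print(rule_based_check(scratched))  # True
```

A trained model replaces the hand-picked threshold with patterns learned from labeled examples, which is why it generalizes across thin, wide, straight, and curved scratches where this rule would need one case each.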

Use case 1: Surface inspection

Surface inspection AI is probably the most common entry point. It is about detecting scratches, dents, discoloration, inclusions, or other visual defects on the part surface.

How it works: a camera captures an image of the surface. The AI model compares this image against its trained knowledge of what a good surface looks like and flags deviations.

What sounds simple has one critical bottleneck: lighting. I cannot emphasize this enough. Lighting is more important than the camera and more important than the AI model in surface inspection. A scratch that is clearly visible under angled light disappears completely under diffuse illumination. Conversely, angled light on a slightly textured surface creates shadows that look like defects.

In practice, this means: before I even think about an AI model, I test different lighting scenarios. Ring light, dome light, angled light, backlight - depending on the material and defect type. This step sometimes takes longer than the actual model training.

For metallic surfaces, which are common in DACH manufacturing, reflective materials pose a particular challenge. A polished aluminum surface reflects the light source directly into the camera, making the image unusable. This requires polarized lighting or specialized diffusers, and even then the result is not always perfect.

Use case 2: Completeness and assembly checks

The second major use case: verifying that all parts are present and correctly assembled. This applies to assemblies where screws, clips, seals, or connectors must sit at defined positions.

Technically, this is often simpler than surface inspection because the question is binary: part present or not. The camera captures an image of the assembly, and the model checks each relevant position.

The challenge comes with variants. When a manufacturer produces twenty product variants on the same line and each variant has a different configuration, the system needs to know which variant is currently being inspected. This requires either data exchange with the MES or automatic variant recognition. In practice, this is often handled through a barcode or a digital trigger from the PLC.
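The variant logic can be sketched as a lookup table keyed by the identifier that comes from the barcode or PLC. Everything here is illustrative: the variant names, positions, and the crude "bright pixel means part present" rule are assumptions standing in for a real detection model.

```python
import numpy as np

# Hypothetical variant configuration: for each variant (identified via
# barcode or PLC trigger), the image positions where a screw must appear.
VARIANT_CONFIG = {
    "VAR-A": [(10, 10), (10, 50)],
    "VAR-B": [(10, 10), (30, 30), (50, 50)],
}

def check_completeness(image: np.ndarray, variant: str,
                       presence_threshold: int = 100) -> list[tuple[int, int]]:
    """Return the configured positions where no part was detected.
    Presence is approximated here as 'pixel brighter than threshold'."""
    expected = VARIANT_CONFIG[variant]
    return [(y, x) for (y, x) in expected if image[y, x] < presence_threshold]

frame = np.zeros((64, 64), dtype=np.uint8)
frame[10, 10] = 200  # only one of the two VAR-A screws is visible
print(check_completeness(frame, "VAR-A"))  # [(10, 50)]
```

Starting with one variant means starting with one entry in that table - expanding later is a configuration change, not a redesign.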

I see the same mistake repeatedly: companies want to cover all variants immediately. My advice is to start with the most common variant and expand the system step by step. A system that reliably inspects one variant is more valuable than one that half-inspects twenty.

Use case 3: Dimensional accuracy and measurement

Computer vision can also be used for dimensional checks - measuring lengths, distances, angles, or radii directly from the camera image.

I have to be honest here: this works well for relative measurements and for tolerances in the range of tenths of a millimeter. For micrometer accuracy or complex 3D geometries, a camera system does not replace a coordinate measuring machine (CMM).

The advantage of camera-based measurement lies in speed and integration. Inline, meaning directly in the production line, I can inspect every part instead of just samples. If a dimension slowly drifts - indicating tool wear - I see the trend early and can react before scrap occurs.

Offline measurements with dedicated measuring systems remain essential where highest precision or complex geometries are required. Computer vision does not replace the CMM, but it complements it: one hundred percent inline inspection with moderate accuracy plus spot checks on the CMM with high accuracy.
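The two ideas behind inline measurement - pixel-to-millimeter calibration and drift monitoring - fit in a short sketch. The reference dimensions and edge positions below are made-up numbers for illustration; in a real system the edges come from an edge-detection step.

```python
import numpy as np

# Calibration: a reference target of known size fixes the image scale.
REFERENCE_MM = 50.0        # known width of the calibration target
REFERENCE_PIXELS = 1000.0  # its measured width in the image
MM_PER_PIXEL = REFERENCE_MM / REFERENCE_PIXELS  # 0.05 mm per pixel

def measure_width_mm(edge_left_px: float, edge_right_px: float) -> float:
    """Convert a pixel distance between two detected edges to millimeters."""
    return (edge_right_px - edge_left_px) * MM_PER_PIXEL

# Drift monitoring: a slow trend in the per-part measurements hints at
# tool wear long before any single part leaves the tolerance band.
measurements = [measure_width_mm(100, 500 + i) for i in range(10)]
trend_per_part = np.polyfit(range(len(measurements)), measurements, 1)[0]
print(round(trend_per_part, 3))  # 0.05 -> the width grows 0.05 mm per part
```

The calibration factor is also where the accuracy limit comes from: at 0.05 mm per pixel, sub-pixel edge detection is what gets you into the tenths-of-a-millimeter range, and no camera trick gets you to micrometers.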

Use case 4: Documentation and traceability

This use case is often underestimated but is enormously valuable in many industries. Every produced part is photographed and the image is linked to a unique serial number or batch number.

If a customer files a complaint six months later, I can pull up the image from the time of production and check whether the defect was already present at shipment. For automotive suppliers in the DACH region who are under enormous documentation pressure, this is a strong argument.

Technically, this is the simplest of the four use cases. One camera, one trigger signal, one database. The challenge lies more in data volume - if you photograph ten thousand parts per day, you need a well-designed storage concept and a clear deletion policy.
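A minimal version of that camera-trigger-database chain, including the deletion policy, might look like this. Table and column names are invented for the sketch; the in-memory SQLite database stands in for whatever store the plant actually uses.

```python
import sqlite3
import time

# Minimal traceability store (sketch): one row per inspected part,
# linking the serial number to the stored image path and a timestamp.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE inspections (
    serial TEXT PRIMARY KEY,
    image_path TEXT NOT NULL,
    captured_at REAL NOT NULL)""")

def record_inspection(serial: str, image_path: str) -> None:
    """Called once per trigger signal, right after the image is saved."""
    db.execute("INSERT INTO inspections VALUES (?, ?, ?)",
               (serial, image_path, time.time()))

def purge_older_than(max_age_days: float) -> int:
    """Deletion policy: drop records older than the retention window."""
    cutoff = time.time() - max_age_days * 86400
    cur = db.execute("DELETE FROM inspections WHERE captured_at < ?", (cutoff,))
    return cur.rowcount

record_inspection("SN-0001", "/data/images/SN-0001.png")
row = db.execute("SELECT image_path FROM inspections WHERE serial = ?",
                 ("SN-0001",)).fetchone()
print(row[0])  # /data/images/SN-0001.png
```

At ten thousand parts per day, the `purge_older_than` step (and an equivalent cleanup for the image files themselves) is not optional housekeeping but part of the system design.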

Hardware reality: what you actually need

A typical setup for automated quality control manufacturing consists of:

  • Industrial camera: Area scan or line scan camera, depending on the application. Resolution between two and twenty megapixels. For most applications, a mid-range camera is sufficient.
  • Lens: Matched to the working distance and field of view. Saving money here backfires - a poor lens makes any good camera worse.
  • Lighting: As described above, the most critical factor. LED lighting with defined geometry and stable brightness.
  • Edge PC or industrial PC: For on-site image evaluation. A modern edge PC with a GPU can run most vision models in real time.
  • Mechanical integration: Mounts, shielding from ambient light, possibly an enclosure. This part is often forgotten in quotes but costs time and money.

Integration into the existing line is often the most labor-intensive part. Where does the camera get mounted? How is the trigger signal generated? How is the result communicated to the PLC? These questions sound trivial but can easily consume several days of work.

Honest limitations

Computer vision in quality control has clear limits. Knowing them saves disappointment and money:

  • Complex 3D geometries: A 2D camera only sees what it sees. Undercuts, bore depths, or internal geometries remain invisible. 3D vision systems help partially but are significantly more expensive and complex.
  • Variable lighting: If ambient lighting fluctuates - from daylight, changing hall lighting, or reflections - image quality suffers. Controlled, shielded lighting is mandatory.
  • Transparent and highly reflective materials: Glass, polished metal, films - these materials are difficult for camera systems. It works, but the effort for lighting and image preprocessing is considerably higher.
  • Rare defects: If a defect type occurs only once per ten thousand parts, training images are scarce. Anomaly detection can help but is less reliable than supervised learning with many examples.
  • Contamination and wear: Cameras and lighting in production environments get dirty. Cutting fluids, dust, vibrations - all of this requires regular maintenance that must be planned for.
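The rare-defects point deserves one more sentence of concreteness. Anomaly detection sidesteps the lack of defect images by modeling only good parts and flagging anything that deviates. A toy version, with synthetic data standing in for real images, shows the principle:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training": per-pixel mean and spread estimated from good parts only.
good_images = rng.normal(128, 5, size=(200, 32, 32))
mean = good_images.mean(axis=0)
std = good_images.std(axis=0) + 1e-6  # avoid division by zero

def anomaly_score(image: np.ndarray) -> float:
    """Highest per-pixel z-score against the good-part statistics."""
    return float(np.abs((image - mean) / std).max())

normal = rng.normal(128, 5, size=(32, 32))
defective = normal.copy()
defective[10:14, 10:14] += 80  # a bright blob never seen in training

print(anomaly_score(normal) < anomaly_score(defective))  # True
```

The catch, as stated above: such a model only says "this looks unusual", not "this is a scratch", and its false-positive rate is harder to control than that of a supervised classifier trained on many defect examples.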

How to get started

My advice for manufacturing companies considering computer vision quality control:

  • Pick one inspection point: Not the entire plant at once. One station, one defect type, one clear success criterion.
  • Lighting first: Before anyone talks about AI models, sort out the lighting situation. A good image with the right lighting is half the battle.
  • Collect training images: At least a few hundred images of good parts and - where available - of defects. The more variability in the images, the more robust the eventual model.
  • Start small, then scale: A working proof of concept at one station delivers the arguments for the next step.

Computer vision in quality control is not a silver bullet. But for the right use cases - repetitive visual inspections under controlled conditions - it is one of the most effective tools available to manufacturing companies today. Anyone who approaches it realistically and starts with a focused pilot project can achieve solid results within a few weeks.