Avoiding Costly Errors: How Lab Error Study Can Improve NIR Results

Near-infrared (NIR) spectroscopy is now commonly used in industries such as food, agriculture, and pharmaceuticals to examine samples rapidly and precisely. However, the accuracy and dependability of NIR results are only as good as the laboratory reference data behind them. This is where a Lab Error study comes in handy. Lab Error refers to any mistake or error that occurs during laboratory testing or analysis of a sample. With a thorough understanding of lab errors, NIR calibrations can provide dependable and accurate results.

This article will examine the importance of Lab Error studies in NIR analysis, how to determine lab errors, how Lab Error studies have helped improve NIR testing procedures, and how quality managers can overcome lab errors and the challenges associated with conducting a Lab Error study.

Sources of Lab Error in NIR Analysis

So, how do lab errors occur? They can arise at several points during analysis: human mistakes during sample preparation or testing, incomplete analysis, misinterpretation, or errors in data calculation.

One possible cause is a mislabeled sample. Another is an operator failing to follow the correct procedure, which can result in a necessary test being performed incorrectly.

Instrument or method variability is another potential source of lab error: different instruments or methods can give slightly different readings even when calibrated and used correctly. These inconsistencies may stem from instrument design, signal-to-noise ratio, repeatability, manufacturing variation, or improper procedures, leading to inconsistent readings between different spectrometers.

Environmental factors like temperature and humidity affect the accuracy and precision of lab measurements. These factors can make instruments behave differently or change the physical properties of samples, leading to measurement errors.

Identifying and quantifying the sources of lab errors can help us better understand a particular method's limitations. We can then minimize these errors by improving training, standardizing procedures to reduce human error, regularly calibrating and maintaining instruments to reduce variability, and controlling environmental conditions to minimize their impact on measurements.

Lab Precision: Understanding the Reproducibility of Laboratory Measurements

Ensuring the accuracy and reliability of NIR spectroscopy measurements is critical in food quality control. Lab Precision refers to the consistency of results obtained by measuring the same sample multiple times using the same method and equipment.

Lab Precision, also known as the standard error of the lab (SEL), plays a significant role in achieving accuracy and consistency in the results.

The SEL measures the precision of a laboratory's measurement process, similar to the Coefficient of Variation (CV%). However, the SEL is calculated based on fewer replicates (usually 5-10) than CV% (usually 20-30). Additionally, the SEL is based on the standard deviation of test results from a single operator or instrument in a single laboratory. In contrast, the CV% is based on the standard deviation of test results across multiple operators or instruments and multiple laboratories. This makes CV% a useful metric for evaluating the consistency and reliability of measurements across different settings.

Both SEL and CV% are important tools in assessing the accuracy and reliability of NIR spectroscopy measurements, and the choice between them depends on the specific context and objectives of the analysis.

To calculate the Lab Precision, we collect ten samples from production at different times to ensure sample variation. The samples should be divided into two sets, sent to the laboratory on different days, and given internal (blind) labels so the duplicates cannot be matched, removing potential bias.

Then, lab personnel should analyze the samples following the standard operating procedures (SOPs). The analysis should be repeated several times, typically 7-10, to calculate the SEL. Unlike the CV%, the SEL is unaffected by the data's mean.

The SEL formula is:

  • SEL = sqrt [ sum (D^2) / (2n) ], where D is each measurement's deviation from the mean and n is the number of measurements

When we have only ten samples analyzed in duplicate, there are too few replicates to calculate a meaningful CV%, so the SEL must be calculated instead. The SEL expresses, in the measurement's own units, how much repeated laboratory results scatter around their mean.

Let's see how to calculate the Lab Precision for the crude protein content of a wheat sample. Suppose the analysis of a wheat sample with a reference crude protein value of 8.0% is repeated seven times, giving measurements of 7.6%, 7.8%, 8.2%, 7.6%, 7.8%, 7.7%, and 7.8%. The mean of these measurements is approximately 7.8%.

To compute the Standard Error of the Lab (SEL) using the formula SEL = sqrt [ sum (D^2) / (2n) ], we calculate the sum of the squared deviations from the mean and divide it by twice the number of measurements. In this case, sum (D^2) = 0.25 and n = 7, so SEL = sqrt [ 0.25 / 14 ] = 0.134.
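
To make the arithmetic easy to reproduce, here is a minimal Python sketch of this SEL calculation. It uses the replicate values from the wheat example above and the 2n denominator from the formula in this article.

    import math

    # Replicate crude-protein measurements (%) of the same wheat sample
    measurements = [7.6, 7.8, 8.2, 7.6, 7.8, 7.7, 7.8]

    n = len(measurements)
    mean = sum(measurements) / n

    # Sum of squared deviations of each measurement from the mean
    sum_sq_dev = sum((x - mean) ** 2 for x in measurements)

    # SEL with the 2n denominator used in this article's worked example
    sel = math.sqrt(sum_sq_dev / (2 * n))

    # Prints an SEL of about 0.133 (0.134 in the text, which rounds the mean to 7.8)
    print(f"mean = {mean:.2f}%, SEL = {sel:.3f}")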

The SEL formula assumes that the measurements are independent and that the variation between them is attributable solely to random error. It is the appropriate statistic when estimating the Uncertainty associated with a laboratory measurement method: unlike the standard error of the mean (SEM), which describes how precisely the mean itself is known, the SEL describes the scatter expected in individual laboratory results. Although the SEM formula may be found in some literature [1], it is best to use the SEL formula when estimating Uncertainty.

Lab Precision can vary with the testing method, instrument, and calibration, so it is crucial to calculate it for each method used in order to account for all potential sources of error. It is equally important to estimate and report the Uncertainty associated with the measurement, considering all sources of error in the measurement process.

A low SEL indicates consistent, reliable measurements with good Lab Precision, while a large SEL suggests inconsistent measurements and poor Lab Precision.

Uncertainty refers to the potential range of values a measurement can take, considering factors such as sample variability, instrument variation, and operator error. Quality managers can estimate uncertainty by analyzing the accuracy and precision of measurements and considering other sources of error.

The ISO Guide to the Expression of Uncertainty in Measurement (GUM) [2] provides appropriate methods to estimate the overall uncertainty in the measurement process. The GUM defines measurement uncertainty as "a parameter associated with the result of a measurement that characterizes the dispersion of values that could reasonably be attributed to the measurand" [3]. Accounting for uncertainty in the calculation of Lab Precision can help us better understand the limitations and potential sources of error in our measurements and make more informed decisions based on the data obtained.

Although both Lab Precision and Uncertainty rely on the standard deviation of measurements, they use this value differently. Lab Precision measures the consistency of measurements, whereas Uncertainty describes the range within which the true value could reasonably lie. Furthermore, Lab Precision may be reported either in the measurement's units (SEL) or as a percentage of the mean (CV%), while Uncertainty is expressed in the same units as the measurement.

Lab Accuracy: Evaluating the Correctness of Laboratory Measurements

Lab Accuracy is another important aspect of food quality control using NIR spectroscopy, and it differs from the Lab Precision discussed earlier. Lab Accuracy determines how close the measured value is to the true value, typically established using an AOAC-approved reference method.

AOAC INTERNATIONAL (formerly the Association of Official Analytical Chemists) is a recognized organization that establishes reference methods used to determine true values in food quality control [3]. AOAC offers the Performance Tested Methods (PTM) program, which certifies methods for testing food quality [4]. Using certified methods helps ensure the accuracy of the results obtained in food quality control.

[Figure: illustration of measurement uncertainty. Source: http://climatica.org.uk/climate-science-information/uncertainty]

When assessing Lab Accuracy, there are various approaches to calculating it depending on whether a known reference value is available. One commonly used formula, the percent recovery, is as follows:

  • Lab Accuracy = (Measured Value / True Value) * 100%

  • Measured value is the value obtained from the method or instrument

  • True value is the value obtained from the AOAC-approved reference method

Let's return to the example from the Lab Precision study, where the mean value was calculated to be 7.8% and the reference value of the wheat sample, established by the AOAC-approved method, is 8.0%. Substituting these numbers, Lab Accuracy = (7.8 / 8.0) * 100% = 97.5%.
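
Here is a small Python sketch of that recovery-style calculation, using the same numbers as the example above.

    def lab_accuracy(measured: float, true_value: float) -> float:
        # Percent recovery: how close the measured value is to the reference value
        return measured / true_value * 100.0

    # Wheat crude protein: lab mean of 7.8% vs the AOAC reference value of 8.0%
    print(f"Lab Accuracy = {lab_accuracy(7.8, 8.0):.1f}%")   # 97.5%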

Measuring accuracy becomes more intricate when the true value is uncertain. Quality managers tackle this challenge with several strategies for estimating the true value. One approach is to compare measurements obtained from different methods or instruments. They also turn to trusted organizations such as the National Institute of Standards and Technology (NIST) [5] and the International Organization for Standardization (ISO) [6], which provide reference materials with established property values.

Assessing accuracy requires a true value against which to compare the measured value. Where the true value is unknown, other metrics such as precision or repeatability can be used to evaluate the consistency and reliability of the measurements. Even when uncertainty looms, these approaches provide a path to reliable and precise results.

Lab Error: Comprehensive Assessment of the Reliability of Laboratory Measurements

Lab Error is a comprehensive assessment of the reliability of laboratory measurements, and one of the key parameters used in this assessment is the Standard Error of Prediction (SEP). The SEP is calculated from the Lab Precision value, SEL. Another metric used to evaluate the trustworthiness of laboratory measurements is the Coefficient of Variation (CV%), which gauges the precision of the measurements relative to the mean value.

In our example, the mean value is 7.80, and the Lab Precision value was previously calculated as SEL = 0.134 (rounded to three decimal places).

Once the Lab Precision value is determined, the minimum and maximum standard errors of prediction (SEP) can be calculated as follows:

  • Lower SEP = 1.5 x SEL

  • Upper SEP = 2 x SEL

A multiplier of 2 accounts for the maximum expected error, while a multiplier of 1.5 represents the minimum expected error. These multipliers help us grasp the potential range of errors in our laboratory measurements, considering the inherent variation.

In the provided example, the Lower SEP is 1.5 x 0.134 = 0.201, indicating the lower end of the expected range, and the Upper SEP is 2 x 0.134 = 0.268, representing the upper end. These values indicate the span within which most future measurements are expected to fall, with the 2 x SEL limit corresponding to roughly 95% of them.
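
As a quick illustration, both SEP bounds follow directly from the SEL; the short Python sketch below simply applies the 1.5x and 2x multipliers to the value from our example.

    sel = 0.134               # Lab Precision (SEL) from the wheat protein example

    lower_sep = 1.5 * sel     # minimum expected prediction error, about 0.201
    upper_sep = 2.0 * sel     # maximum expected prediction error, about 0.268

    print(f"Lower SEP = {lower_sep:.3f}, Upper SEP = {upper_sep:.3f}")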

The Lab Precision, CV%, Lower SEP, and Upper SEP values provide a comprehensive assessment of the reliability of laboratory measurements, allowing quality lab managers to determine the accuracy and precision of their data and to assess the validity of their experimental procedures.

The CV% can be calculated as follows:

  • CV% = (SD / mean value) x 100%

To calculate this dataset's coefficient of variation (CV%), we first need to calculate the measurements' standard deviation (SD).

In the previous example, we determined the mean value to be 7.8%.

Next, we calculate the variance by summing the squared differences of each measurement from the mean and dividing by (7 - 1). In this case, the variance is 0.25 / 6 = 0.0417.

To calculate SD, we use the following formula:

  • SD = sqrt(variance) = sqrt(0.0417) = 0.204

Now we can calculate CV%:

  • CV% = (SD / mean) x 100% = (0.204 / 7.8) x 100% = 2.6%

In this example, the CV% of 2.6% indicates that the method has a good level of precision, and it is generally accepted that a CV% below 5% is acceptable.
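
The same CV% calculation in a short Python sketch, again using the seven wheat protein replicates:

    import math

    measurements = [7.6, 7.8, 8.2, 7.6, 7.8, 7.7, 7.8]   # crude protein (%), 7 replicates

    mean = sum(measurements) / len(measurements)

    # Sample variance: squared deviations from the mean divided by (n - 1)
    variance = sum((x - mean) ** 2 for x in measurements) / (len(measurements) - 1)
    sd = math.sqrt(variance)

    cv_percent = sd / mean * 100.0
    print(f"SD = {sd:.3f}, CV% = {cv_percent:.1f}%")      # about 0.204 and 2.6%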

By measuring Lab Error, we can identify any test method or equipment issues and make the necessary adjustments to ensure accurate and reliable results. However, it is important to note that SEL is not the same as CV%, as each describes a different aspect of the data: SEL expresses the lab's repeatability in the same units as the measurement, while CV% expresses the measurements' variability relative to their mean.

It's important to recognize that Lab Error can fluctuate based on the testing approach, instruments utilized, and their calibration. As a result, it's essential to compute the Lab Error for each technique employed in the laboratory to address all possible sources of error.

Knowing the Lab Error helps laboratory personnel improve processes and guarantee dependable and credible results. A lower Lab Error and CV% indicate that the lab's results are more likely to be accurate and precise. Therefore, measuring and monitoring lab errors regularly is essential to ensure the testing process's quality.

How Lab Error Affects NIR Results

Lab Error is a critical factor that can significantly affect the accuracy and reliability of NIR analysis results. Conducting a Lab Error study determines the size of the Lab Error, which reflects the accuracy and precision of the laboratory's testing process. This value can then be used as a reference to assess the accuracy and precision of NIR results.

Industry standards suggest that the difference between NIR results and reference method results should fall within 1.5 to 2 times the Lab Error.

For example, suppose the external lab sends us a primary method result with a true (reference) value of 8%, the SEL we calculated previously is 0.134, and we measure 7.85% on our NIR instrument. The acceptable difference lies between 1.5 x 0.134 and 2 x 0.134. We use the following formula to determine the acceptable range of our NIR result:

  • Acceptable Range = Reference Value ± Acceptable Difference

Plugging in the values, we get:

  • Acceptable Range = 8 ± (1.5 x 0.134) to 8 ± (2 x 0.134) = 8 ± 0.201 to 8 ± 0.268, i.e., 7.80 to 8.20 at the stricter limit and 7.73 to 8.27 at the wider limit (rounded to two decimals)

Now, let's consider the value measured on the NIR machine, 7.85%. Since it falls within even the stricter range of 7.80 to 8.20, we can confidently conclude that the result obtained from the NIR machine aligns with the accuracy and precision standards set by the industry.

If the measured NIR result falls outside the wider acceptable range of 7.73 to 8.27, it raises significant concerns about the accuracy and precision of the NIR method used to measure the sample. In such cases, it is crucial to conduct a thorough investigation and implement corrective actions to identify the underlying cause of the error and improve the overall accuracy and precision of the NIR method.
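
The acceptance check can be expressed in a few lines of Python; this sketch uses the reference value, SEL, and NIR reading from the example above.

    sel = 0.134          # Lab Precision (SEL) from the earlier study
    reference = 8.0      # primary (reference) method value, % protein
    nir_result = 7.85    # value measured on the NIR instrument

    # Acceptable difference between NIR and reference: 1.5x to 2x the Lab Error
    strict_limit = 1.5 * sel    # about 0.201
    wide_limit = 2.0 * sel      # about 0.268

    difference = abs(nir_result - reference)
    if difference <= strict_limit:
        print("Within the stricter (1.5 x SEL) range - result accepted")
    elif difference <= wide_limit:
        print("Within the wider (2 x SEL) range - acceptable, but worth monitoring")
    else:
        print("Outside the acceptable range - investigate the NIR method")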

It's worth noting that NIR results may not always match reference method results precisely. This discrepancy arises because NIR and reference methods operate on distinct principles and may be influenced differently by various factors. However, the acceptable difference range provides a valuable guideline for evaluating the accuracy and reliability of NIR results and can help identify areas for improvement in the testing process.

To ensure the accuracy and reliability of NIR technology in a specific application, it is crucial to understand the connection between Lab Error and NIR results. Verifying that the NIR results fall within an acceptable range relative to the Lab Error is vital for making informed decisions based on the obtained results. This verification process can boost our confidence in the results, thereby validating the use of NIR technology.

Note on NIR Accuracy:

In this article, we discuss how to determine the Lab Error and how it can affect NIR results. Assessing the accuracy of NIR itself is a broader topic that needs more explanation. In upcoming articles, we will discuss the different ways and tools to determine NIR Accuracy, such as RSQ, SEC, SECV, CV%, 1-VR, and other accuracy measures. Keep an eye out for our future discussions on NIR Accuracy.

Lab Error Case Studies in Agriculture and Food Manufacturing

Several factors, like sample preparation, instrument calibration, and operator error, can influence the accuracy and reliability of NIR analysis. Many industries have conducted Lab Error studies to enhance the testing processes and reduce errors in NIR analysis.

Lab Error studies involve deliberately introducing errors into the testing process and then measuring the impact of those errors on the results, which allows industries to identify potential sources of error and develop strategies to mitigate or eliminate them.

For instance, in the agricultural industry, a Lab Error study showed that sample heterogeneity could result in substantial measurement errors when determining the protein content of soybeans. Consequently, the industry developed a new sample preparation method to reduce heterogeneity and improve accuracy.

  • Naeve, S.L., Proulx, R.A., Hulke, B.S. and O'Neill, T.A. (2008), Sample Size and Heterogeneity Effects on the Analysis of Whole Soybean Seed Using Near Infrared Spectroscopy. Agron. J., 100: 231-234. https://doi.org/10.2134/agronj2007.0230

In the food manufacturing industry, instrument drift was found to cause significant errors in snack moisture content measurement. Thus, the industry implemented a new calibration schedule to reduce errors.

  • Lim, C. K., & Norris, K. H. (1995). The effect of instrument drift on near-infrared reflectance measurement of moisture in snack foods. Journal of Food Science, 60(4), 758-761.

  • Rudnitskaya A (2018) Calibration Update and Drift Correction for Electronic Noses and Tongues. Front. Chem. 6:433. https://doi.org/10.3389/fchem.2018.00433

  • Xiao XU, Lijuan XIE, Yibin YING. Factors influencing near infrared spectroscopy analysis of agro-products: a review. Front. Agr. Sci. Eng., 2019, 6(2): 105‒115 https://doi.org/10.15302/J-FASE-2019255

In the dairy industry, milk samples stored above 4°C were found to cause errors in measuring fat content. A new sample storage protocol was introduced to maintain samples at 4°C or below, enhancing accuracy.

  • Cao, W., Zhang, Y., Guo, X., Li, H., & Chen, Y. (2020). A study on sample temperature's influence on near-infrared fat milk content analysis. Journal of Dairy Science, 103(1), 591-598.

  • Lukáš Dvořák, Martin Fajman, Kvetoslava Sustova, Influence of Sample Temperature for Measurement Accuracy with FT-NIR Spectroscopy, Journal of AOAC INTERNATIONAL, Volume 100, Issue 2, 1 March 2017, Pages 499–502, https://doi.org/10.5740/jaoacint.16-0264

Another study examined how sample inhomogeneity affects the accuracy of NIR measurement of sugar content in fruit puree. It found that inhomogeneous samples contribute to measurement error when they are not mixed correctly. Based on the study, the food industry created procedures with instructions for properly mixing samples, making NIR analysis more accurate and dependable.

  • Ou C., Yu X., Wang X., Yang Y., Liu J., & Lu J. (2018). Impact of sample homogenization on near-infrared analysis of sugar content in fruit puree. Journal of Food Science and Technology, 55(9), 3584-3592.

Similarly, the wheat industry conducted a Lab Error study to evaluate the impact of particle size on NIR analysis of protein content. The study revealed that particle size plays a significant role in the accuracy of NIR analysis and requires adjusting calibration models accordingly. The industry responded by developing new calibration models considering particle size variation, leading to a more accurate measurement of wheat protein. 

Lab Error studies have played a significant role in helping industries improve their testing processes and reduce errors in NIR analysis. By identifying potential sources of error and developing strategies to mitigate them, industries can improve the accuracy and reliability of NIR analysis, leading to better quality products and improved customer satisfaction.

The Role of Statistical Analysis in Ensuring Accurate Results for NIR Analysis in the Lab

When conducting NIR testing in the lab, it's essential to highlight the statistical methods utilized to ensure accurate and dependable results. Statistical analysis is vital for researchers and food quality lab managers in identifying patterns and detecting potential error sources in NIR testing. This enables them to develop improved strategies to enhance the final results.

In the realm of NIR analysis within food and agriculture, the concept of outliers comes into play. Outliers represent data points significantly deviating from most samples within a dataset. Such deviations may arise due to measurement errors, sample contamination, or biological variations.

Once these outliers are pinpointed in NIR data through statistical techniques such as principal component analysis, Global-H and Neighbourhood-H, Mahalanobis distance, and cluster analysis, their impact can be mitigated using data pre-processing methods such as spectral smoothing, baseline correction, or normalization. If outliers persist, indicating that such samples are not represented in the calibration database, a new calibration must be developed. This typically requires a minimum of 30-50 samples for calibration creation and an additional 10 samples for validation.
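
To make the outlier-screening step concrete, here is a small, hypothetical Python sketch that computes Mahalanobis distances on PCA scores and applies a commonly used Global-H > 3 rule of thumb. The score matrix and the cutoff are illustrative assumptions, not a prescribed workflow.

    import numpy as np

    # Hypothetical PCA scores of calibration spectra (rows = samples, cols = retained factors)
    rng = np.random.default_rng(42)
    scores = rng.normal(size=(60, 6))

    center = scores.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))

    diff = scores - center
    mahalanobis_sq = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

    # Global-H: squared Mahalanobis distance scaled by the number of factors;
    # GH > 3 is a commonly cited rule of thumb for flagging spectral outliers
    gh = mahalanobis_sq / scores.shape[1]
    print("Samples flagged as outliers:", np.where(gh > 3)[0])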

Bias is another critical consideration, particularly when investigating lab errors caused by differing primary methods (e.g., the Kjeldahl versus Dumas methods for protein), environmental changes, or shifts between different crops. Bias refers to a consistent discrepancy in a particular direction (e.g., the NIR result being 1% higher for protein and 0.5% lower for fat than the lab result), that is, a systematic deviation from the true value (Lab Result - NIR Result = Bias Difference). Addressing or mitigating bias is pivotal for obtaining accurate and dependable results, and correcting it can significantly enhance the precision of NIR analysis. Regularly checking for bias, using a minimum of ten samples quarterly and whenever a new crop or formulation is introduced, is essential. Statistical analysis allows bias to be identified by comparing measured values against true values.
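
A minimal sketch of the bias check described above, using hypothetical paired results for ten samples (the numbers are made up purely for illustration):

    # Paired results for the same ten samples: primary lab method vs NIR prediction
    lab_results = [8.0, 8.3, 7.9, 8.1, 8.4, 7.8, 8.2, 8.0, 8.1, 7.9]   # hypothetical
    nir_results = [7.9, 8.2, 7.7, 8.0, 8.3, 7.7, 8.1, 7.9, 8.0, 7.8]   # hypothetical

    # Bias = average of (Lab Result - NIR Result), as defined above
    bias = sum(lab - nir for lab, nir in zip(lab_results, nir_results)) / len(lab_results)
    print(f"Bias = {bias:+.2f}")   # a consistent positive value means NIR reads low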

Statistical analysis is crucial in achieving precision and dependability in NIR analysis. It empowers researchers to identify and tackle error sources, ultimately elevating the data quality generated. As such, it's an indispensable asset for anyone engaged in NIR analysis within their research endeavors.

Conclusion: Overcoming Lab Error Studies Challenges

With the appropriate resources, instruments, and Standard Operating Procedures (SOPs), food processors and researchers can effectively overcome challenges related to Lab Error studies and ensure the reliability and validity of their results.

To assess overall lab quality, lab quality managers can adhere to these guidelines: for accuracy, an excellent result is 97.5-102.5% recovery, very good falls within 95.0-97.5% or 102.5-105.0% recovery, and good falls just outside those ranges. For precision, an excellent range is 0-2.5% coefficient of variation (CV), very good falls within 2.5-5.0% CV, and good is within 5.0-7.5% CV.

Furthermore, operators must undergo comprehensive training and periodic evaluation to ensure vigilance against potential human errors while gathering, documenting, and interpreting data. Testers should employ double-checking procedures to mitigate human errors, establish comprehensive SOPs, clearly label samples, and harness process automation technology. Also, using multiple methods to verify accuracy and correct errors is best.

Factory standardization serves as a powerful tool for reducing instrument variability. This involves setting parameters that all instruments must meet for certification, such as specific wavelength calibration, maximum tolerance for reading variations, and consistent instrument construction.

Factory standardization also entails regular instrument testing to ensure consistent and accurate readings. Using a single supplier for all instruments further guarantees uniform construction and results.

Despite constraints such as time, cost, resource availability, and limited reference methods, the benefits of integrating a Lab Error study into NIR validation far outweigh the challenges. By identifying and quantifying error sources in the reference method, the Lab Error study significantly enhances the accuracy and precision of the NIR method.

Conducting a Lab Error study emerges as an indispensable step in validating the utilization of NIR technology. It empowers understanding of the sources of measurement errors and facilitates informed decisions about result accuracy and dependability.


For more information about NIR technology and how it can be used in food analysis, check out my previous articles:


#LabErrorStudy #NIRAnalysis #FoodQualityControl #LabPrecision #LabAccuracy #ReliableResults #DataQuality #InstrumentStandardization #ErrorQuantification #NIRValidation #LabQualityManagement #PrecisionandAccuracy #MeasurementErrors #QualityAssurance #LabTesting #DataIntegrity #InstrumentCalibration #LabMethods #ErrorReduction #ValidationProcess #FoodTechnology #AgricultureInnovation #QualityTesting #FoodSafety #FeedQuality #GrainQuality #AgriculturalResearch #PrecisionFarming #SustainableAgriculture #QualityAssessment #FoodIndustry #AgTech #AgriculturalTechnology


About Me: With 8 years of experience in the Food and Agriculture industry, I've seen firsthand the importance of accurate Near-Infrared (NIR) analysis in daily operations. Every newsletter is carefully researched, with hours of personal time spent on preparation. I pay meticulous attention to detail in every piece I write. The thoughts I share in each article are my own, reflecting my perspectives and insights. These thoughts do not represent the company's viewpoints.

I want to invite you to engage and participate in the conversation. If you find the articles informative and thought-provoking, please consider liking and reposting them. Your engagement and support help spread knowledge and drive discussions within our community.

For quality managers, ensuring the accuracy of NIR analysis is a significant challenge that can directly impact the reliability of results.

Connect with Me: I'm committed to sharing insights and knowledge gained from my experience in the field. Feel free to connect with me on LinkedIn to continue the conversation and stay updated on the latest food quality control and NIR analysis developments.

Learn More: FOSS acknowledges the significance of Lab Error investigations and provides specialized teams to support customers in obtaining precise outcomes using NIR. Utilizing FOSS NIR technology coupled with data analytics helps ensure the quality and safety of food products.

For more information on how FOSS can help with your food quality testing needs and challenges, please visit FOSS Analytics or message me on LinkedIn to discuss.
