What every embedded programmer should know about ADC measurement, accuracy and sources of error

ADCs you encounter will typically be specified as 8, 10 or 12-bit. This is, however, rarely the accuracy that you should expect from your ADC. It seems counter-intuitive at first, but once you understand what goes on under the hood this will be much clearer. What I am going to do today is take a simple evaluation board for the PIC18F47K40 (MPLAB® Xpress PIC18F47K40 Evaluation Board) and determine empirically (through experiments and actual measurements) just how accurate you should expect ADC measurements to be.

Feel free to skip ahead to a specific section if you already know the basics. Here is a summary of what we will cover, with links for the cheaters.

  • Units of Measurement of Errors
  • Measurement Setup
  • Sources and Magnitude of Errors
  • Voltage Reference
  • Noise
  • Offset
  • Gain Error
  • Missing Codes and DNL
  • Integral Nonlinearity (INL)
  • Sampling Error
  • Adding up Errors
  • Vendor Comparison
  • Final Notes

Units of Measurement of Errors

When we talk about ADCs you will often see the term LSB used. This term refers to the voltage represented by the least significant bit of the ADC; in other words, it is the voltage for which you should read 1 on the ADC. This is a convenient measure for ADCs since the reference voltage is often not fixed, and the size of 1 LSB in volts will depend on where you have set the reference, while most errors caused by the transfer function will scale with the reference. For a 10-bit ADC with a 3.3V range, one LSB will be 3.3/(2^10) = 3.3/1024 = 3.22mV. An error of 1% on a 10-bit converter would represent 1% * 1024 = 10.24x the size of one LSB, so we will refer to this as 10 LSB of error, which means our measurement could be off by 32.2mV, or ten times the size of 1 LSB.

When I have 10 LSB of error I really should be rounding my results to the nearest 10 LSB, since the least significant bits of my measurement will be corrupted by this error. 10 LSB takes 3.32 bits to represent. This means that my lowest 3 bits are possibly incorrect and I can only be confident in the values represented by the 7 most significant bits of my result. In other words, the effective number of bits (ENOB) for my system is only 7, even though my ADC is taking a 10-bit measurement. The lower 3 bits are affected by the measurement error and cannot be relied upon, so they should be discarded if I am trying to make an accurate absolute voltage measurement. We can always work out exactly how many bits of accuracy we are losing, or to how many bits we need to round, using the calculation:

    log(#LSB error) / log(2)

Note that this calculation will give us fractional numbers of bits. If we have 10 LSB of error, the error does not quite affect a full 4 bits (that happens only at 16 LSB), but we cannot say it removes only 3 bits, because that already happened at 8 LSB, so this is somewhere in between. In order to compare errors meaningfully we will work with fractions of bits in these cases, so 10 LSB of error reduces our accuracy by 3.32 bits. This is especially useful when errors are additive, because we can add up all the fractional LSB of error to get the total error to the nearest bit.

At this point I would like to encourage you to take your oscilloscope and try to measure how much noise you can detect on your lines. You will probably be surprised that most desk oscilloscopes can only measure signals down to 20mV, which means that 1 LSB on a 10-bit ADC with a 3.3V reference will be close to 10x smaller than the smallest signal your digital scope can measure! If you can see noise on the scope (which you probably can), then it is probably at least 20mV, or 10 LSB of error. It turns out that our intuition about how accurate an ADC should be, as well as how accurately our scope can measure, is seldom correct ...
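If you want to play with these numbers yourself, here is a minimal sketch of the arithmetic above in plain C (host-side code, nothing device-specific; the helper names are my own):

    #include <math.h>
    #include <stdio.h>

    /* Size of one LSB in volts for a given reference voltage and resolution. */
    static double lsb_volts(double vref, unsigned bits)
    {
        return vref / (double)(1UL << bits);
    }

    /* Number of result bits corrupted by an error expressed in LSB. */
    static double bits_lost(double error_lsb)
    {
        return log(error_lsb) / log(2.0);
    }

    int main(void)
    {
        printf("1 LSB on a 10-bit ADC at 3.3V = %.2f mV\n",
               lsb_volts(3.3, 10) * 1000.0);                /* 3.22 mV */
        printf("10 LSB of error costs %.2f bits\n",
               bits_lost(10.0));                            /* 3.32    */
        return 0;
    }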
Measurement Setup

I am using my trusty Saleae Logic Pro 8 today. It has a 12-bit ADC, measures +-10V on the analog channel, and is calibrated to be accurate to between 9 and 10 ENOB of absolute accuracy. This means that 1 LSB of error on the Saleae will be roughly 4.8mV, which for my 2V system with a 10-bit ADC is already the size of 2 LSB of measurement error. When I ground the Saleae input and take a measurement, we can see how much noise to expect on the input during our measurements. As you will see later, we actually want 2-3 LSB of noise so that we can improve accuracy by digital filtering; if you do not have enough noise this is not possible, so this looks really good. Using the software to find the maximum variation, you can see that I have about 15.64mV of noise on my line. Since the range is +-10V this is only 15.6/20000 = 0.08% of error, but for my target 2V range this is 15.6/2048*1024 = 8 LSB of error on my measurement equipment before I even start!

For an experiment we are going to need an analog voltage source to measure using the ADC. It so happens that this device has a DAC, so why not just use that! You would think that this was a no-brainer, but it turns out, as always, that it is not quite as simple as it would seem! What I will do first is set the DAC and ADC to use the same reference (this has the added benefit that Vref inaccuracy will be cancelled out, nice!). We expect that if we set the DAC to give us 1.024V (50% of full range) and we then measure this using the 10-bit ADC, we would measure half of the ADC range, or 512, right? For the test I made a simple program that measures the ADC every 1 second and prints the result to the UART. Well, here is the result of the measurement (to the right). Not what you expected?! Not only are the first two readings appallingly bad, but the average seems to be 717, which is a full 40% more than we expect! How is this possible?

Well, this is how. Not only is the ADC inaccurate here, but the DAC is even more so! The DAC is only 5 bits and it is specified to be accurate to 5 LSB. That is already a full 320mV of error, but that is still not nearly enough to explain why we are measuring 717/1024*2.048 = 1.434V instead of 1.024V... So what is really going on here? To see, I connected my trusty Saleae and changed the application to cycle the DAC through all 32 values, 1s per value, and make a plot for us to look at. On the Saleae we see this. It turns out that the DAC is such a weak source that anything you connect to its output (even ADC input leakage, or simply an I/O pin with nothing connected to it!) will load down the DAC and skew the output! This has actually been the cause of consternation for many a soul (see e.g. this post on the Microchip forum).

Wow, so that makes sense, but is there anything we can do about this? On this device, unfortunately, there is not much we can do. There are devices with on-board op-amps you can use to buffer the DAC output, like the PIC16F170x family, but this device does not have op-amps so we are out of luck! I will blog all about DACs and the reason for this shape on another occasion; this blog is about the ADC after all! So all I am going to do is adjust the DAC setting to give us roughly the voltage we need, measuring the result with the Saleae, and call it a day. It turns out I needed to subtract 6 from the output setting to get close. We now see a measurement of 520, and this is what we see while taking measurements with the Saleae. 10.37mV of noise on just about 1V and we are in business!
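For reference, the DAC sweep above was produced with roughly this loop. This is a sketch, not production code: the DAC1CON0/DAC1CON1 register names and bit values are my reading of the PIC18F47K40 datasheet, and _XTAL_FREQ is an assumption for my setup, so check both against your own device header and clock configuration:

    #define _XTAL_FREQ 1000000UL    /* assumed 1 MHz Fosc, used by __delay_ms() */
    #include <xc.h>
    #include <stdint.h>

    /* Step the 5-bit DAC through all 32 output codes, one second per step. */
    void dac_sweep(void)
    {
        DAC1CON0 = 0xA0;            /* enable the DAC, output on the DACOUT1 pin */
        for (uint8_t code = 0; code < 32; code++) {
            DAC1CON1 = code;        /* 5-bit output code */
            __delay_ms(1000);
        }
    }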
Sources and Magnitude of Error

When measurement errors are uncorrelated they can all add up to form the total worst-case error. For example, if I have 2 LSB of noise and I also have 2 LSB of reference error, the reading can be 2 LSB removed from the correct value as a result of the reference, plus an additional 2 LSB as a result of the noise, giving 4 LSB of total error. Because these two types of errors are uncorrelated, the bits contributed by each to the total error are additive.

At this point I want to mention that customers often come to me demanding a 16-bit ADC because they feel that the 10-bit one they have is not adequate for their application. They can seldom explain to me why they need 31uV of accuracy, or what advanced layout techniques they are applying to keep the noise levels anywhere near this range, and most of the time the real problem turns out to be that their 10-bit ADC is being used so badly that they are hardly getting 5 bits of accuracy from the converter. I also often see calculations which effectively discard the lower 4 bits of the ADC measurement, leaving only 8 bits of effective measurement; if you do that, getting more bits in the ADC is obviously only going to buy you disappointment!

That said, let's look at the most significant sources of error one by one in more detail. There are quite a few, so we will give them headings and numbers.

1. Voltage Reference

To get us started, let's look at the voltage reference and see how many LSB it contributes to our measurement error. If you are using a 1% reference, then please do not insist that you need a 16-bit or even a 12-bit ADC, because your reference alone is contributing errors into the 7th most significant bit and 8 bits is all you are going to get anyway!

The datasheet for our evaluation board chip (PIC18F47K40) shows that the voltage reference will be accurate to +-4% when we set it to 2.048V like we did. People are always surprised when they realize how many LSB they are losing due to the voltage reference! 4% of 1024 = 41, which means that the reference alone can contribute up to 41 LSB of error to our system!

Using an expensive off-chip reference also complicates things for us. We would now have to be very careful with the layout to avoid introducing noise at the reference pin, and also take care of any signals coupling into this pin. Even then the reference will likely be something like a TL431, which is only accurate to 1%; that is 10 LSB, reducing our 10-bit ADC to less than 8 ENOB.

We must note that reference errors are not equally distributed. At the top of the scale 1% represents 10 LSB of error, but at the lower end of the scale 1% will represent only 1% of 1 LSB. Since we are looking for the worst-case error, we have to work with 10 LSB for a 1% error over the full ADC range. In your application you may be able to scale this error contribution down to better represent the range you are expecting to measure. For example, at mid-range, where our test signal is, the reference error will only contribute 5 LSB of error with a 1% reference, or about 20 LSB for our 4% internal reference.

The reference error is something we could calibrate out if we knew what the error was, and many manufacturers discard it, stating simply that you should calibrate it out. Sadly these references tend to drift over time, temperature and supply voltage, so you usually cannot just calibrate them in the factory, compensate for the error in software and forget about it.

To revisit our 16-bit ADC scenario: if I want to measure accurately to 31uV (16 bits on a 2V reference), that reference would have to be accurate to 31uV/2V = 0.0015%. Let's look on Digikey for the voltage reference with the best specs we can find. The best candidate I can find is this one at $128.31 a piece, and even that gives me only 0.1% with up to 0.6ppm/C of drift. This means from 0 to 100C I will have 0.006% of temperature drift (2 LSB) on top of the 0.1% tolerance (which is another 33 LSB). Now, to be fair, if I am building a control system I am more interested in perturbations from a setpoint, and a 16-bit ADC may be valuable even if my reference is off, because I am not trying to take an absolute measurement; but still, maintaining noise levels below 30uV is more of a challenge than it sounds, especially if I am driving some power stage which adds noise to the equation. This is of course the difference between accuracy and resolution. Accuracy gives me the minimum absolute error, while resolution gives me the smallest relative unit of measure.
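To make the reference arithmetic concrete, here is a tiny sketch of the conversion from reference tolerance to LSB of error (plain C, my own helper name):

    /* Worst-case reference error in LSB: tolerance times the full-scale code.
       The contribution scales with the code being measured, so at mid-scale
       it is half of the full-scale figure. */
    static double ref_error_lsb(double tolerance, unsigned bits)
    {
        return tolerance * (double)(1UL << bits);
    }

    /* ref_error_lsb(0.04, 10) = ~41 LSB  (4% internal reference, full scale)
       ref_error_lsb(0.01, 10) = ~10 LSB  (1% reference, full scale)        */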
2. Noise

Noise is of course the problem we all expect. It can be a pretty large contributor to your measurement errors, and digital circuits are known for producing lots of noise that will couple into your inputs, but as we will see, noise is not all bad and can even be essential if you want to improve the results through digital post-processing.

We have seen that every 2mV of noise will add 1 LSB to the error on our system, as we have a 2V reference and 1024 steps of measurement. As you have now seen, this 2mV is probably much smaller than we can measure with a typical oscilloscope, so we cannot be sure how much noise we really have simply by looking at it on our scope. For most systems the recommendation would be to place the microcontroller in its lowest-power sleep mode and avoid toggling any output pins during the sampling of the ADC measurement, to get the measurement with the lowest noise level.

A simple experiment will show how much noise we could be coupling into the measurement when an adjacent pin is being toggled. I updated our program from before to simply toggle the pin next to the ADC input constantly, and measured with the Saleae to see what the effect is. On the left is the signal zoomed out and on the right is one of the transitions zoomed in so you can get a better look. That glitch on the measurement line is 150mV, or 75 LSB of noise, due to an adjacent pin toggling, and the dev board I have does not even have long traces, which would have made this much worse!

It seems like a good idea to filter all this noise using analog low-pass filters like filter capacitors, but this is not always wise. We can make small amounts of noise work to our advantage, as long as it is white noise which is uncorrelated with our signal and other errors. When we do post-processing, like taking multiple samples and averaging the result, we can potentially increase the overall accuracy of our measurement. Using this technique it is possible to increase the ENOB (effective number of bits) of your measurements by simply taking more samples and averaging them. Without getting too deep into the math: if you oversample a signal by a factor of N you will improve the SNR by a factor of sqrt(N), which means oversampling 256 times and taking the average will result in a 16x increase in SNR, which represents an additional 4 bits of resolution from the ADC (see the sketch below). Of course, this is where having uncorrelated white noise of at least +-1 LSB is important. If you have no noise on your signal, you would likely just sample the same value 256 times and the average would not add any improvement to the resolution. If you do have white noise added to the signal, however, you will sample a variety of values, with the average lying somewhere in between the LSB steps you can measure, and that average represents the value of the signal more accurately. For a detailed discussion on this topic you can take a look at this application note by Silicon Labs: https://www.silabs.com/documents/public/application-notes/an118.pdf
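Here is a minimal oversample-and-decimate sketch, assuming the MCC-generated types and the two-argument ADCC_GetSingleConversion() shown later in this post, and assuming the input carries 1-2 LSB of uncorrelated white noise:

    #include <stdint.h>
    #include "adcc.h"   /* MCC-generated ADC header; path may differ in your project */

    /* Accumulate 256 ten-bit samples (an 18-bit sum) and shift right by 4,
       yielding a 14-bit result: 4^n oversampling buys n extra bits, and
       256 = 4^4. Without white noise on the input this gains nothing. */
    uint16_t adcc_read_oversampled(adcc_channel_t channel)
    {
        uint32_t acc = 0;
        for (uint16_t i = 0; i < 256; i++) {
            acc += ADCC_GetSingleConversion(channel, 4);
        }
        return (uint16_t)(acc >> 4);
    }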
3. Offset

The internal circuitry of the ADC will add some offset error to the conversion. This error moves all measurements either up or down by an equal amount. Offset is a critical parameter for an ADC and should be specified in the datasheet for your device. For the PIC18F47K40 the error due to offset is specified as 2 LSB.

Of course, if we know what the offset is we can simply subtract it from the results, so many specifications will exclude the offset error and claim that you can easily "calibrate out" the offset. This may be possible, even easy to do, but if you do not write the code for it and do the actual math, you will have to include the offset error in your accuracy calculations, and measuring the current offset can be a real challenge in a real-world system which is not located on your laboratory bench. If you do decide to measure the offset on the factory floor and calibrate it out in software, you need to be careful to use an accurate reference, avoid noise and other sources of error, and make sure that the offset remains constant over the operating range of voltage and temperature and does not drift over time. If any of these assumptions does not hold, your calibration will be met with limited success.

Offset is often hard to calibrate out since many ADCs are not accurate close to the extremes (at Vref or 0V). If they were, you could take a measurement with the input on Vref+ and on Vref- and determine the offset, but we knew it was never going to be that easy! The offset will also differ from device to device, so it is not possible to calibrate it out with fixed values in your code; you will have to actively measure it on every device in the factory and adjust for each device's offset.

Some manufacturers will actually calibrate out the offset of the ADC for you during their manufacturing process. If this is the case you will probably see a small specified offset error of +-1 LSB, which means that it is calibrated to be within this range. On our device the datasheet specifies a typical offset error of 0.5 LSB with a max error of 2 LSB, so this device is factory calibrated to remove the offset error, but even after this we should still expect up to 2 LSB of drift in the offset around the calibrated value.
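If you do have two trusted calibration points (which, as noted above and in the next section, is easier said than done), a two-point correction removes both offset and gain error in one go. A sketch with illustrative names, assuming the raw codes for two known inputs were captured during factory test:

    #include <stdint.h>

    typedef struct {
        float gain;     /* slope correction  */
        float offset;   /* intercept, in LSB */
    } adc_cal_t;

    /* Fit a straight line through two (raw code, ideal code) pairs
       measured at factory test against a trusted reference. */
    adc_cal_t adc_two_point_cal(float raw_lo, float ideal_lo,
                                float raw_hi, float ideal_hi)
    {
        adc_cal_t cal;
        cal.gain   = (ideal_hi - ideal_lo) / (raw_hi - raw_lo);
        cal.offset = ideal_lo - cal.gain * raw_lo;
        return cal;
    }

    /* Apply the stored correction to a raw reading. */
    static inline float adc_correct(const adc_cal_t *cal, uint16_t raw)
    {
        return cal->gain * (float)raw + cal->offset;
    }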
4. Gain Error

Similar to the offset, the internal transfer function of the ADC is designed to be as close to ideal as possible, but there is always some error. Gain error changes the slope of the transfer function. Depending on the offset, this causes an error which is at its maximum at either the top or the bottom end of the measurement scale, as shown in the figure below. Like the offset, it is possible to calibrate out the gain error, as long as we have enough reference points to use for the calibration. If the transfer function is perfectly linear, we only require 2 measurement points (as in the two-point sketch above).

For our device the datasheet spec is typically 0.2 LSB of gain error with a max error of 1.5 LSB. This means that we cannot gain much from attempting to calibrate out the gain on this one. For other manufacturers you can easily find gain and offset errors in the tens of LSB, which makes calibration and compensation for the gain and offset worth the effort. The PIC18F47K40 is not only compensated for drift with temperature but also individually calibrated in the factory, so any additional calibration measurement will at best be accurate to 1 LSB, and the device is already specified to typically have less than this error, so calibration will probably gain us nothing.

5. Missing Codes and DNL

We expect that every time the code increments by 1 LSB, the input voltage has increased by exactly 1 LSB in size. The DNL error of an ADC is a measure of how close to this ideal we are in reality. It represents the largest single-step error that exists over the entire range of the ADC. If the DNL is stated as 0.5 LSB, it can take anything from 0.5 LSB to 1.5 LSB of input voltage change to get the output code to increment by 1. When the DNL is more than 1 LSB, it means that we can move the input voltage by 2 LSB and get only a single count from the converter. When this happens it is possible that the next code is squeezed down to a width of 0 LSB, which causes the converter to skip that code entirely, as shown below. Most converters will specify that the result increases monotonically with the voltage and that there are no missing codes as you scan through the range, but you still have to be careful, because this is under ideal conditions; when you add in the other errors it is possible that some codes get skipped. So when you are checking the output of the converter, never compare against a specific conversion value. Always look for a value in a range around the limit you are checking (see the sketch below).
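One idiomatic way to honor that advice is a threshold compare with hysteresis, so that DNL, noise and skipped codes around the trip point cannot make the comparison chatter. A sketch, with illustrative threshold and margin values:

    #include <stdbool.h>
    #include <stdint.h>

    #define TRIP_CODE  512u   /* nominal threshold, in ADC codes           */
    #define MARGIN       8u   /* wider than worst-case DNL + noise, in LSB */

    /* Returns the new above/below state given the latest reading and the
       previous state; the dead band of 2*MARGIN codes absorbs the error. */
    bool above_trip(uint16_t code, bool was_above)
    {
        if (was_above) {
            return code > (TRIP_CODE - MARGIN);   /* stay 'above' until well below   */
        } else {
            return code >= (TRIP_CODE + MARGIN);  /* go 'above' only when well above */
        }
    }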
6. Integral Nonlinearity - INL

INL is another of the critical parameters for any ADC and will be stated in your datasheet if the ADC is any good. For our example the INL is specified as 3.5 LSB. The term INL refers to the integral of the differential nonlinearity. In effect it represents the maximum deviation from the ideal transfer function of the ADC, as shown in the picture below. The yellow line represents the ideal transfer function, while the blue line represents the actual transfer function. As you can see, the INL is defined as the size of the maximum error over the range of the ADC. Since the INL error can occur at any location along the curve, it is not possible to calibrate it out. It is also uncorrelated with the other errors we have examined. We just have to live with this one!

7. Sampling Error

A SAR ADC has a sampling capacitor which holds the voltage we are converting during the conversion cycle. When we take a sample, we must allow enough time for this sampling capacitor to charge to the level of accuracy we want to see in our conversion. Effectively we end up with a circuit that has some series impedance through which the sampling capacitor is charged. The simplified circuit for the PIC18F47K40 looks as follows (from the datasheet). As you can see, the series impedance (Rs) together with the sampling switch and passgate impedance (RIC + RSS) forms a low-pass RC filter charging Chold. A detailed calculation of the sampling time required to be within 1 LSB of the desired sampling value is shown in the ADC section of the device datasheet. If we leave too little time for the sample to be acquired, this directly results in a measurement error. In our case this means that if we have a 10K Rs and we wait 462us after the sampling mux switches to the input we are measuring, the capacitor will be charged to within 0.5 LSB of our target voltage (a simplified version of this calculation is sketched below).

The ADC on the PIC18F47K40 has a built-in circuit that can keep the sampling switch closed for us for a number of Tadc periods. This can be set by adjusting the ADACQ register, or by using the API generated by MCC. That first inaccurate result we saw in the conversion was a direct result of the channel not being given enough time to charge the sampling capacitor, since the acquisition time was set to the default value of 0. Of course, since we are not switching channels, the capacitor is closer to the correct value when we take subsequent samples, so the error seems to go away over time! I have seen customers just throw away the first ADC sample as inaccurate, but if you do not understand why, you can easily get yourself into a lot of trouble when you need to switch channels! We can re-do the measurement, this time using an acquisition time of 4 x Tadc = 6.8us. This is the result.
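As promised, here is a simplified version of the acquisition-time calculation. This models only the single-pole RC charging of Chold and ignores the temperature and offset terms that the datasheet equation includes, so treat it as a sanity check rather than the datasheet result; the component values in the comment are illustrative:

    #include <math.h>

    /* Time for a capacitor charging through r_total to settle to within
       target_lsb of its final value on an n-bit converter:
       error(t) = e^(-t/RC), and we need error < target_lsb / 2^bits,
       so t > R*C*ln(2^bits / target_lsb). */
    double acquisition_time_s(double r_total_ohms, double c_hold_farads,
                              unsigned bits, double target_lsb)
    {
        return r_total_ohms * c_hold_farads *
               log((double)(1UL << bits) / target_lsb);
    }

    /* Example (illustrative values): 10k source + ~7k internal impedance,
       28pF hold cap, settling to 0.5 LSB on a 10-bit ADC:
       acquisition_time_s(17e3, 28e-12, 10, 0.5) = ~3.6 us */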
NOTE: There is another errata item on this device: you have to wait at least 1 instruction cycle after setting the ADGO bit before reading it to check whether the conversion is complete. At first I did what the datasheet suggests: set ADGO and then wait with while(ADGO); for the conversion to complete. Due to this errata, however, the ADGO bit will still read 0 the first time you read it, and you will think the conversion is done while it has not even started, resulting in an ADC reading of 0! After adding the required NOP() to the generated MCC code as follows, the incorrect first reading is gone:

    adc_result_t ADCC_GetSingleConversion(adcc_channel_t channel, uint8_t acquisitionDelay)
    {
        // Turn on the ADC module
        ADCON0bits.ADON = 1;

        // Select the A/D channel
        ADPCH = channel;

        // Set the acquisition delay
        ADACQ = acquisitionDelay;

        // Disable the continuous mode
        ADCON0bits.ADCONT = 0;

        // Start the conversion
        ADCON0bits.ADGO = 1;
        NOP();  // NOP workaround for the ADGO silicon errata

        // Wait for the conversion to finish
        while (ADCON0bits.ADGO)
        {
        }

        // Conversion finished, return the result
        return (adc_result_t)(((adc_result_t)ADRESH << 8) + ADRESL);
    }

Uncorrelated Errors

I will leave the full analysis up to the reader, but all of these errors are uncorrelated and thus additive, so the worst-case error for our system occurs when all of these errors align: the offset is in the same direction as the gain error, as the noise, as the INL error, and so on. Of course, when we test on the bench it is unlikely that we will encounter a situation where all of these are 100% aligned, but if we have manufactured thousands of units running in the field for years it is definitely going to happen, and much more often than you would like, so we have no choice but to design for the worst-case error we are likely to see in the wild. For our example the different sources of error add up as follows:

  • Voltage Reference = 4% [41 LSB]
  • Noise [8 LSB]
  • Offset [2.5 LSB]
  • Gain [1.5 LSB]
  • INL [3.5 LSB]

That is a total of 56.5 LSB of potential absolute error in the measurement. This reduces our effective number of bits by log(56.5)/log(2) = 5.8 bits, which means that our 10-bit result can have absolute errors running into the 6th bit, giving us only 4 ENOB (effective number of bits) when we are looking for absolute accuracy. We can improve this to 25.5 LSB by using a 1% off-chip reference, which brings the ENOB to about 5 bits.

If we look at the measurement we get using the Saleae, we measure 0.99V on the line, which should result in 0.99V/2.048V * 1024 = 495, but our measurement is in fact 520, which is off by 25 LSB. So our one-board sample does not hit the worst-case error at the center of the sampling range here, but our error extended at least into the 5th bit of the result, as our 25 LSB error requires more than 4 bits to represent. Nevertheless, 25 LSB is quite a bit better than the worst-case value of 56.5 LSB which we calculated, so this particular sample is not doing too badly! I am going to get my hands on a hair dryer in the week and take some measurements at an elevated temperature, and then I will come back and update this for your reading pleasure 🙂
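The budget above is simple enough to fold into a few lines of C, which makes it easy to play what-if games with the individual contributions (plain host-side code, numbers taken from the list above):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Worst-case uncorrelated error budget, in LSB, from the sections above. */
        const double err_lsb[] = { 41.0,  /* 4% voltage reference */
                                    8.0,  /* noise                */
                                    2.5,  /* offset               */
                                    1.5,  /* gain                 */
                                    3.5   /* INL                  */ };
        double total = 0.0;
        for (unsigned i = 0; i < sizeof err_lsb / sizeof err_lsb[0]; i++)
            total += err_lsb[i];

        /* Prints: total = 56.5 LSB, bits lost = 5.8 (~4 ENOB from 10 bits). */
        printf("total = %.1f LSB, bits lost = %.1f\n",
               total, log(total) / log(2.0));
        return 0;
    }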
Comparison

I recently compared some ADCs from different vendors. I was actually looking more at the other features, but since I was busy with this I also noted down the specs. Not all of the datasheets were perfectly clear, so do reach out to me if I made a mistake somewhere, but this is how they matched up in terms of ADC performance. As far as I could find them I used the worst-case specifications and not the typical ones. Some manufacturers only specify typical results, so this comparison is probably not fair to those who publish better specifications with better information. Let me know in the comments how you feel about this. I will go over the numbers again and maybe update all of these to typical values for a fairer comparison if someone asks me for it ...

    Device                   INL    DNL    Offset    Gain     Total (INL+Offset+Gain)
    Xilinx XC7Z010            2      1      8         0.5     10.5
    Microchip PIC32MZ EF      3      1      2         8       13
    TI CC3220SF               2.5    4      6(1)     82(1)    90.5
    Espressif ESP32          12      7     25(1)      ?(2)    37+
    ST Micro STM32L475        2.5    1.5    2.5       4.5      9.5
    Renesas R65N V2           3      2      3.5       3.5     10

    (All values in LSB.)

I noted that many of these manufacturers specify their ADC at only one temperature point (25C), so you probably have to dig a little deeper to ensure that the specs will not vary greatly over temperature.

(1) These figures were specified in the datasheet as an absolute voltage and I converted them to LSB for the maximum range and best resolution of the ADC. Specifically, for the TI device the offset was specified as 2mV and the gain error as 20mV on a 1.4V range, and for the ESP32 the offset is specified as 60mV, but for a wider voltage range of 2.45V.

(2) For the ESP32 I was not able to determine the gain error clearly from the datasheet.

Final Notes

We can conclude a couple of very important points from all of this:

  • If the datasheet claims a 12-bit ADC, we should not expect 12 bits of accuracy. First calculate what to expect from the entire system, and expect the reference to add the most to the error.
  • All 12-bit converters are not equal. When comparing devices, do not just look at how many bits the converters provide; also compare their performance! The same system can yield between 5 and 10 bits of accuracy depending on the specs of the converter, so do not be fooled!
  • Many of the vendors specify their ADC only at one specific temperature and at the maximum reference voltage. Take care not to be fooled by this, shall we call it "creative", specmanship, and be sure to compare apples with apples when looking for absolute accuracy.

Source Code

For those who have this board or device, I attach the test code I used for download here: ADC_47K40.zip