Orunmila

Everything posted by Orunmila

1. Sorry @dvvrao, I have edited my answer to make it clearer. When you set that bit, the behavior of the UART changes from a 16x oversampling clock to a 4x oversampling clock.
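For reference, this is the kind of setting I mean - a minimal sketch only, assuming a PIC32-style UART where the BRGH bit in UxMODE selects the 4x clock; check your device's datasheet for the actual register names:

#include <xc.h>

/* Sketch (assumption: PIC32-style UART; register names vary by device).
 * BRGH = 0 -> baud clock derived from a 16x oversampling clock
 * BRGH = 1 -> baud clock derived from a 4x oversampling clock       */
void uart1_use_4x_oversampling(void)
{
    U1MODEbits.BRGH = 1;  /* switch from 16x to 4x oversampling */
}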
2. I realize now that I have a new pet peeve. The widespread and blind inclusion, by Lemmings calling themselves embedded C programmers, of extern "C" in C header files everywhere. To add insult to injury, programmers (if I can call them that) who commit this atrocity tend to gratuitously leave a comment in the code claiming that this somehow makes the code "compatible" with C++ (whatever that is supposed to mean - IT DOES NOT !!!). It usually looks something like this (taken from MCC produced code - uart.h):

#ifdef __cplusplus  // Provide C++ Compatibility
extern "C" {
#endif

So why does this annoy me so much? (I mean besides the point that I am unaware of any C++ compiler for the PIC16, which I generated this code for - which in itself is a hint that this has NEVER been tested on any C++ compiler!)

First and foremost - adding something to your code which claims that it will "Provide C++ Compatibility" is like entering into a contract with the user that they can now safely expect the code to work with C++. Well I have bad news for you buddy - it takes a LOT more than wrapping everything in extern "C" to make your code "compatible with C++"!!! If you are going to include that in your code you had better know what you are doing, and your code had better work (as in actually being tested) with a real C++ compiler. If it does not, I am going to call out the madness you perpetrated by adding something which only has value when you mix C and C++, while your code cannot be used with C++ at all in the first place - because you are probably doing a host of things which are incompatible with C++! In short, as they say - your header file comment is writing cheques your programming skills cannot cash.

Usually a conversation about this starts with me asking the perpetrator to explain what "that thing" actually does, and when you actually need it to use the code with a C++ compiler, so let's start there. I can count on one hand the people who have been able to explain this to me correctly.

Extern "C" - What it does and what it is meant for

This construct is used to tell a C++ compiler that it should set a property called language linkage, which affects mangling of names as well as possible calling conventions. It does not guarantee that you can link to C code compiled with a different compiler. This is a good point to quote the C++14 specification, section 7.5 on Language linkage. That said, if you are e.g. using GCC for your C as well as your C++ compiler you will have a very good chance of being lucky enough that the implementation-defined linkage will be compatible!

A C++ compiler will use "name mangling" to ensure that symbols with identical names can be distinguished from each other by passing additional semantic information to the linker. This becomes important when you use e.g. function overloading, or namespaces. This mangling of names is a feature of C++ (which allows duplicate names across the program), but does NOT exist in C (which does not allow duplicate symbols to be linked at all). When you place an extern "C" wrapper around the declaration of a symbol you are telling the C++ compiler to disable this name mangling feature, and also to alter its calling convention when calling a function to be more likely to link with C code, although calling conventions are a topic all of their own. As you may have deduced by now, there is a reason that this is not called "extern C++"!
The extern "C" wrapper is required to make your C++ calls match the names and conventions in the C object files, which do not contain mangled names. If you ARE going to place a comment next to your faux pas, at least get it right and say "For compatibility with C naming and calling conventions" instead of claiming incorrectly that this is somehow required for C++ compatibility!

There is one very specific use case, one which we used to encounter all the time in what would now be called "the olden days" of Windows 3.1, when everything was shared through dynamic link libraries (.dll files). It turns out that in order to use a dll which was written using C++ from your C program, you had to wrap the publicly exported function declarations in extern "C". Yes that is correct, this is used to make C++ code compatible with C, NOT THE OTHER WAY AROUND!

So when do I really need this then?

Let's take a small step back. If you want to use your C code with a C++ compiler, you can simply compile it all with the C++ compiler! The compiler will resolve the names just fine after mangling them all, and since you are properly using namespaces with your C++ code this will reduce the chances of name collisions and you will be all the happier for doing the right thing. No need for disabling name mangling there.

If you do not need this to get your code to compile with a C++ compiler, then when DO you need it? Ahhh, now you are beginning to see why this has become a pet peeve of mine ... but the plot still has some thickening to do before we are done...

Pretty much the only case where it makes sense to do this is when you are going to compile some of your code with a C compiler, then compile some other code with a C++ compiler, and in the end take the object files from the two compilers and link them together, probably using the C++ linker. Of course when you feed your .c files and your .cpp files to a typical C++ compiler it will actually run a C compiler on the .c files and the C++ compiler on the .cpp files by default, and mangling will be an issue, and you will exclaim "you see! He was wrong!", but not so fast ... it is simple enough to tell the compiler to compile all files with the C++ compiler, and this remains the best way to use C libraries in source form with your C++ code.

If you are going to compile the C code into a binary library and link it in (which sometimes happens with commercial 3rd party libraries - like .dll's - DUH!) then there is a case for you to do this, but most likely this does not apply to you at all, as you will have the source available all the time and you are working on an embedded system where you are making a single binary. To help ease the world of hurt you are probably in for, you should read up on how to tell your compiler to use a particular calling convention which has a chance of being compatible with the linker. If you are using GCC you can start here.

To be clear, if you add extern "C" to your C code and then compile it into an object file to be linked with your C++ program, the extern "C" qualifier is entirely ignored by the C compiler. Yes that is right, all the way to producing the object file this has no effect whatsoever. It is only when you are producing calls to the C code from the C++ code that the C++ code is altered to match the C naming and calling conventions, so this means that the C++ code is altered to be compatible with C.

In the end

There is a hell of a lot of things you will need to consider if you want to mix C and C++.
I promise that I will write another blog on that next time around if I get anybody asking for it in the comments. Adding extern "C" to your code is NOT required to make it magically "compatible with C++". In order to do that you need to heed with extreme caution the long list of incompatibilities between C and C++, and you will probably have to be much more specific than just stating C and C++. Probably something like C89 and C++11, if I had to wager a guess at what will be most relevant.

And remember the reasons why it is almost always just plain stupid to use extern "C" in your C headers:

1. It does not do what you think it does.
2. It especially does not do what your comment claims it does - it does not provide C++ compatibility! Don't let your comments write cheques your code cannot cash!
3. Before you even think of adding that, make sure you KNOW AND FOLLOW ALL the C++ compatibility rules first.
4. For heaven's sake TEST your code with a C++ compiler!
5. If you want to use a "C" library with your C++ code, simply compile all the code with your C++ compiler. QED - no mess no fuss!
6. If it is not possible to do 5 above, then compile the C code normally (without that nonsense) and place extern "C" around the #include of the C library only! (example below). After all, this is for providing C linkage compatibility to C++ compiled code!
7. If you are producing a binary library using C to be used/linked with a C++ compiler, then please save us all and just compile the library with both C and C++ and provide 2 binaries!
8. If all of the above fail, because you really just hit that one in a million case where you think you need this, then for Pete's sake educate yourself before you attempt something that hard; hopefully in the process you will realize that it is just a bad idea after all!

Now please just STOP IT! I feel very much like pulling a Jeff Atwood and saying after all that only a moron would use extern "C" in their C headers (of course he was talking about using tabs).

Orunmila

Oh - I almost forgot - the reasonable way of using extern "C" looks like this:

#include <string>   // Yes <string>, not <string.h> - this is C++ !
#include <cstdio>   // The C++ header for the C stdio library

// Include C libraries
extern "C" {
#include "clib1.h"
#include "clib2.h"
}

using namespace std; // Because the cuticles on my typing finger hurt if I have to type out "std::" all the time!

// This is a proper C++ class because I am too patrician to use "C" like that peasant library writer!
class myClass {
private:
    int myPrivateInt;
    ...
    ...

Appeals to Authority

- Dan Saks' CppCon talk entitled "extern c: Talking to C Programmers about C++"
- A very good answer on Stackoverflow
- Some decent answers on this Stackoverflow question
- Fairly OK post on geeksforgeeks
- The dynamic loading use case with examples

Note: That first one is a bit of a red herring as it does not explain extern "C" at all - nevertheless it is a great talk which all embedded programmers can benefit from 🙂
3. Depends on what you need of course 🙂 You need a repository for everything that will become a stand-alone project/product, something you would like to give a version number. Branches are temporary things that you will delete. Forks are new projects which are unique projects themselves, based on another project; like a fork in the road, it takes you to a new destination. Looking at your project I think you should not have 2 branches. If you want this to be 1 project/product you should merge the branches so that you can use the shared code, and use either separate folders or defines to get your code to cross-compile. If you want them to be 2 separate projects you should make a second repo with the other project. Just a side-note: cut and paste is a perfectly reasonable way to re-use code. If you make one repo which cross-compiles you are not just sharing code, your projects actually have a hard dependency on each other, which means you can never change the one without profoundly affecting the other one. This kind of "marriage" of projects is often not the intent of sharing code, so take care not to end up with unwanted dependencies.
4. Yes, your peripheral clock is definitely 48MHz, which means with BRGH=1 you will only be able to go up to 6MHz; if you change the clock to 96MHz you can generate up to 12MHz. BUT you should beware that when you have BRGH=1 the formula is dividing by 4, because the oversampling clock is only running at 4x, and this can be notoriously unreliable. For more details see this discussion; in summary, if you oversample 16x it leaves you with an error budget of 2.03%. When you oversample only 4x the error budget narrows to 0.52%, and achieving that accuracy means you will have to drive the lines pretty hard to reduce the slew rate and also match the impedance to prevent ringing if you want to communicate reliably! BTW, note that I used a 10% rise time and fall time for that calculation in the discussion. At a baud rate of 12MHz that means a rise time of 8.3ns, and at 24MHz you will be down to 4.15ns of allowed rise and settling time on the line to sample it accurately. Better use a scope to check if your full system can achieve this; if not, you will be able to send but you will not be able to receive successfully.
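Just to spell out the arithmetic, here is a minimal sketch. I am assuming the usual PIC32-style high-speed baud formula baud = Fpb / (4 x (UxBRG + 1)) with BRGH=1, and UxBRG >= 1 as the 6MHz figure above implies:

#include <stdio.h>

/* Sketch of the baud rate arithmetic discussed above.
 * Assumption: PIC32-style formula with BRGH = 1:
 *   baud = Fpb / (4 * (UxBRG + 1))                    */
int main(void)
{
    unsigned long fpb = 48000000UL;   /* peripheral clock */

    for (unsigned long brg = 1; brg <= 3; brg++) {
        unsigned long baud = fpb / (4UL * (brg + 1));
        printf("UxBRG=%lu -> %lu baud\n", brg, baud);
    }
    /* With Fpb = 48 MHz, UxBRG = 1 gives 6 MHz; doubling Fpb
     * to 96 MHz doubles the achievable rate to 12 MHz.       */
    return 0;
}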
5. Follow up: From George's response on the Microchip Forum this was verified as a bug in the simulator DMA implementation. He also posted that this "should" be fixed in MPLAB X 5.20, which means it may end up only being included in 5.25. The response on the forum was this:
6. How do you compile that? That #define does not look like "normal" ASM, if there is such a thing ... perhaps I should say: how do I find out more about writing ASM code for this device, assembling it into machine code and running it? Can I set breakpoints etc.? Sorry, I do not usually work with ASM. 🙂
7. Please feel free to upload code, pictures, videos, whatever you need to make it clear over here. That is the reason we made this forum, to overcome exactly that problem, so thank you for doing that! George writes the simulator, so there is no better person to help you!
8. Ok, got some feedback that the REAL bug is that the code compiled in v2.10; this "bug" was fixed in v2.15. The way to get the behavior back to matching v2.10 is to pass the linker flag "--allow-multiple-definition", which will allow inline functions defined in header files included in multiple compilation units to link successfully. Of course if you do it that way (place the implementation in the header) then the code SHOULD only compile when you have used "static inline"; if you just placed inline, like I did in my example, it is not supposed to compile, and as I discovered it also does not compile on GCC or LLVM. When you use "static inline" correctly it compiles on all the mentioned versions of all the mentioned compilers, because the code is then correct 🙂
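A minimal sketch of the pattern that links cleanly everywhere (hypothetical names; the point is the static inline in a header included from more than one .c file):

/* inlinedheader.h - hypothetical example of the portable pattern.
 * static inline gives the function internal linkage, so every
 * compilation unit that includes this header gets its own private
 * copy and the linker never sees a duplicate symbol.             */
#ifndef INLINEDHEADER_H
#define INLINEDHEADER_H

static inline int myInlineFunction(int x)
{
    return x + 1;
}

#endif

/* main.c and otherFile.c can now both #include "inlinedheader.h" */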
9. Of course the answer here is that the semantics for inline in C89 do not exist (there is no such thing as inline). In GNU89 you can define a C function in a C file and place inline on it as a hint for the optimizer, BUT in C99 and C11 this is illegal; in C99 and C11 the standard specifies that the only legal way to use inline is to have the function body in the header file ... Read this for some background: https://gustedt.wordpress.com/2010/11/29/myth-and-reality-about-inline-in-c99/ This is the relevant part: "So you'd have to put the implementation of such candidate functions in header files." EDIT: To be super pedantic about it, the standard does not exactly say you must place the definition in the header file, but if you want to use the function from more than one C file you will have to do exactly that for it to work.
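For completeness, a minimal sketch of the strict C99 pattern (hypothetical names): the inline definition lives in the header, and exactly one .c file provides the external definition with extern inline:

/* square.h - hypothetical C99-style inline: the body lives in the header */
#ifndef SQUARE_H
#define SQUARE_H

inline int square(int x)
{
    return x * x;
}

#endif

/* square.c - exactly one compilation unit emits the external
 * definition, for any call sites the compiler chose not to inline */
#include "square.h"
extern inline int square(int x);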
10. I mentioned in my first response that this is incorrect. I tried it and it works on XC32, but it should not work according to the standard, so the implementation seems to be incorrect. Most importantly, although this works on XC32, it does NOT work on any other compiler in my project. I need a solution which works on all compilers; just FYI, my project runs on 12 different compilers, so making it work on XC32 is less than 10% of my problem here ... So we originally used GNU89 semantics for inline, which means it works as long as all compilers can handle this. GCC can handle this if you pass in the command line switch -fgnu89-inline OR if you set the C dialect to be GNU89. Both cases should work, and both do in v2.10. The problem is that in v2.15 neither of these options works, so it is now impossible to use GNU89 semantics for inline by either method. This means the compiler now does C99 or C89 only and can no longer support GNU89 inlining, which is a little strange as GCC, which it is based on, happily handles this...
11. At a quick look I see that DMA is supported and so is UART, so I would expect that this would be covered. I will get some time on the weekend to try this out and will let you know if I find something interesting. I am sure there are others on here who have more experience at this and can just throw up an answer though 🙂
12. Ok, now I am sleuthing to figure out exactly what is going on. At first I thought that the problem was just that -fgnu89-inline is being ignored, but the plot thickens! When I remove the std as well as the -fgnu89-inline from the project I attached above, it still builds fine on v2.10 and fails on v2.15, so they are not only ignoring the setting, they are also defaulting things differently ... At first I suspected that the default mode was changed from GNU89 to C99 in this compiler version, but when I check for it, __STDC_VERSION__ is not defined when compiling with either compiler, which means it is not suddenly defaulting to C99. This leaves me thinking that it may have changed the default from GNU89 to GNU99? So my first test was to pass -std=c99 to v2.10, and yes, it fails identically to v2.15 with no standard specification, so I thought this makes sense. I then compiled on v2.10 with -std=gnu89 and as expected it compiles fine, but then I tried to compile on v2.15 with -std=gnu89 and nope, inline does not work with gnu89 semantics either. So it seems like they are not in fact ignoring the -fgnu89-inline flag; it seems that the implementation for that has somehow been ripped out of the heart of the compiler, because even if you try and force it to use gnu89 as the C dialect it fails on v2.15... I will wait and see if we can get an answer from Microchip on this, and update here if I do.
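If you want to repeat the dialect check I did, a probe along these lines works (a minimal sketch; the macro values come from the C standards, and #warning is an extension that GCC-based compilers like XC32 support):

/* Sketch of a C dialect probe using only standard macros.
 * __STDC_VERSION__ is undefined in C89/GNU89 mode, 199901L in
 * C99/GNU99 mode, and 201112L in C11 mode.                     */
#if !defined(__STDC_VERSION__)
    #warning "C89/GNU89 mode: __STDC_VERSION__ is not defined"
#elif __STDC_VERSION__ >= 201112L
    #warning "C11 (or later) mode"
#elif __STDC_VERSION__ >= 199901L
    #warning "C99/GNU99 mode"
#endif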
13. OMG you are a genius! That was it! I carefully read the release notes and this was not mentioned at all. Very disappointing; this cost me a couple of hours of struggling today, and someone else here has an open issue with Microchip support to try and get a Harmony 2.05 project to build with the new compiler and has not managed to resolve their issue in a week now, so this is costing customers a lot of time, and we all know time is money! I will go ahead and change my code to C99 semantics for inline, and I guess I will have to also update the Harmony code myself if I want to use the latest compiler. Sigh ...

I was just busy testing on LLVM, as promised in my update above, when you posted that, btw. Of course since I was using GNU89 semantics for inline, the project was not compiling on LLVM either, except there it was complaining that there was no definition for the symbol instead of complaining about a duplicate definition. When I changed the inline definition to extern inline in my header the error swapped around: now LLVM was no longer complaining that the symbol was missing, but I had a duplicate symbol. Strangely this compiled just fine on XC32? Had to brush up on my C99 inline semantics a bit and found that I probably want static inline, and yes, that compiles on both LLVM and XC32, so I guess I am going to go with that for now.

BTW, the release notes contain only this for XC32 v2.15:

New Features in MPLAB® XC32 v2.15

- New Part Support -- This release introduces initial support for the MEC15xx family of Embedded Controllers as well as the ATSAMx7, ATSAME54/D51, and ATSAMC2x/D2x families of 32-bit microcontrollers.
- Tightly-Coupled Memory (TCM) on SAMx7 MCUs -- The SAM family features a low-latency SRAM interface called Tightly-Coupled Memory (TCM). To support this interface, XC32 provides a new tcm attribute. You can apply this attribute to a function or variable and it will be placed into instruction or data TCM as appropriate. (e.g. uint32_t __attribute__((tcm)) var;) To enable TCM, pass the -mitcm=<size_in_bytes> and the -mdtcm=<size_in_bytes> options to xc32-gcc/g++ both when compiling and when linking. (See the device datasheet for the size values supported by your target device.) The device-specific startup code and the device-specific linker script then work together to set up, initialize, and enable TCM at startup, before your main() function is called. With this option enabled, the linker allocates the vector table to ITCM, improving both interrupt latency and latency determinism. Also for improved determinism, you may choose to move your stack to DTCM by passing the -mstack-in-tcm option to xc32-gcc/g++ at compile and link time. The linker will allocate a stack to DTCM and the startup code will transfer the stack from System SRAM to DTCM before calling your main() function.
- Issues Fixed -- See the Fixed Issues section for information on bug fixes addressed in this release.
- 64-bit executables coming soon -- A future release of XC32 will be provided only as 64-bit executables for Windows x64, Linux x64, and MacOS x64. We will no longer provide 32-bit executables.
14. I just downloaded XC32 v2.15; I was using v2.10 before. I find that some of my projects no longer compile. On my first check, the problems seem to occur when inline functions are used and the header where the inline implementation is done is included in more than one compilation unit. Has any of you seen similar issues? I will investigate further and post here if I arrive at an answer.

UPDATE: Ok, I managed to make a small test project to replicate the problem. I am attaching it here: TestInlineXC32_2.15.zip Next I am going to test this on some other compilers to see what the deal is. I have confirmed that with that project, when you switch it to v2.10 or older it all compiles just fine, but if you use v2.15 it fails to link with the following error:

"/Applications/microchip/xc32/v2.15/bin/xc32-gcc" -mprocessor=32MZ2048EFM100 -o dist/default/production/TestInlineXC32_2.15.X.production.elf build/default/production/main.o build/default/production/otherFile.o -DXPRJ_default=default -legacy-libc -Wl,--defsym=__MPLAB_BUILD=1,--no-code-in-dinit,--no-dinit-in-serial-mem,-Map="dist/default/production/TestInlineXC32_2.15.X.production.map",--memorysummary,dist/default/production/memoryfile.xml
nbproject/Makefile-default.mk:151: recipe for target 'dist/default/production/TestInlineXC32_2.15.X.production.hex' failed
make[2]: Leaving directory '/Users/ejacobus/MPLABXProjects/TestInlineXC32_2.15.X'
nbproject/Makefile-default.mk:90: recipe for target '.build-conf' failed
make[1]: Leaving directory '/Users/ejacobus/MPLABXProjects/TestInlineXC32_2.15.X'
nbproject/Makefile-impl.mk:39: recipe for target '.build-impl' failed
build/default/production/otherFile.o: In function `myInlineFunction':
/Users/ejacobus/MPLABXProjects/TestInlineXC32_2.15.X/inlinedheader.h:6: multiple definition of `myInlineFunction'
build/default/production/main.o:/Users/ejacobus/MPLABXProjects/TestInlineXC32_2.15.X/inlinedheader.h:6: first defined here
/Applications/microchip/xc32/v2.15/bin/bin/gcc/pic32mx/4.8.3/../../../../bin/pic32m-ld: Link terminated due to previous error(s).
collect2: error: ld returned 255 exit status
make[2]: *** [dist/default/production/TestInlineXC32_2.15.X.production.hex] Error 255
make[1]: *** [.build-conf] Error 2
make: *** [.build-impl] Error 2
BUILD FAILED (exit value 2, total time: 680ms)
15. Melvin Conway quipped back in 1967 that "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations." Over the decades this old adage has proven to be quite accurate, and it has become known as "Conway's Law". Researchers from MIT and Harvard have since shown that there is strong evidence for this correlation; they called it "The Mirroring Hypothesis". When we read "The Mythical Man-Month" by Fred Brooks we see that we already knew back in the seventies that there is no silver bullet when it comes to Software Engineering, and that the reason for this is essentially the complexity of software and how we deal with it. It turns out that adding more people to a software project increases the number of people we need to communicate with and the number of people who need to understand it. When we just make one big team where everyone has to communicate with everyone, the code tends to reflect this structure. As we can see, the more people we add to a team, the more the structure quickly starts to resemble something we all know all too well! When we follow the age-old technique of divide and conquer, making small Agile teams that each work on a part of the code which is their single responsibility, it turns out that we end up getting encapsulation and modularity, with dependencies managed between the modules. No wonder the world is embracing agile everywhere nowadays! You can of course do your own research on this; here are some org charts of some well known companies out there you can use to check the hypothesis for yourself!
16. If you are using cmake, why not use xc8 with that and avoid the IDE altogether? The pro version is pretty good, and the free version is good enough to prove concepts and evaluate with. Besides, you can get an evaluation license for the pro version either way...
17. Well if you just replace the nasty characters you do not need a state machine, but if you want to hold off on sending SECURE: instead of just replacing a \n with that, you will need a 2-state machine - hardly worthy of the name 😉 - we should call it a flag then ...
18. I was not suggesting you filter the format strings; I was suggesting that you filter the output 😉
19. printf will bind to stdio by simply calling putchar repeatedly. If your user will supply the data, there is no risk if you run the code in TrustZone, as they will not be able to access any data maliciously without causing a hardware fault. So this means your only remaining problem is ensuring that the user does not print backspace characters or \r\n sequences, so you can simply remove/ignore backspace and replace every \r\n with "\r\nSECURE:" and you should be good? This can all be done quite safely inside of the implementation of putchar, and you can run that inside of TrustZone (which I have not used myself, so I do not know the details of the limitations). A sketch of what I mean is below.
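A minimal sketch only, assuming an XC8-style putch() hook that printf calls for every character; UART_Write() is a placeholder for whatever low-level output routine your target provides:

#include <stdbool.h>

extern void UART_Write(char c);  /* placeholder for the real driver */

/* Filtering inside the putchar hook: drop backspaces, and stamp the
 * SECURE: prefix after every \r\n sequence. The single static flag
 * is the "2-state machine" mentioned above.                        */
void putch(char c)
{
    static bool sawCR = false;

    if (c == '\b') {             /* drop backspace outright */
        return;
    }
    UART_Write(c);

    if (sawCR && c == '\n') {    /* after every \r\n, add the prefix */
        const char *p = "SECURE:";
        while (*p) {
            UART_Write(*p++);
        }
    }
    sawCR = (c == '\r');
}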
20. What every embedded programmer should know about ADC measurement, accuracy and sources of error

ADC's you encounter will typically be specified as 8, 10 or 12-bit. This is however rarely the accuracy that you should expect from your ADC. It seems counter-intuitive at first, but once you understand what goes on under the hood this will be much clearer. What I am going to do today is take a simple evaluation board for the PIC18F47K40 (MPLAB® Xpress PIC18F47K40 Evaluation Board) and determine empirically (through experiments and actual measurements) just how accurate you should expect ADC measurements to be. Feel free to skip ahead to a specific section if you already know the basics. Here is a summary of what we will cover, with links for the cheaters:

- Units of Measurement of Errors
- Measurement Setup
- Sources and Magnitude of Errors (Voltage Reference, Noise, Offset, Gain Error, Missing Codes and DNL, Integral Nonlinearity (INL), Sampling Error)
- Adding up Errors
- Vendor Comparison
- Final Notes

Units of Measurement of Errors

When we talk about ADC's you will often see the term LSB used. This term refers to the voltage represented by the least significant bit of the ADC, so in reality it is the voltage for which you should read 1 on the ADC. This is a convenient measure for ADC's since the reference voltage is often not fixed; the size of 1 LSB in volts will depend on what you have the reference at, while most errors caused by the transfer function will scale with the reference. For a 10-bit ADC with 3V3 of range, one LSB will be 3.3/(2^10) = 3.3/1024 = 3.22mV. An error of 1% on a 10-bit converter would represent 1%*1024 = 10.24x the size of one LSB, so we will refer to this as 10 LSB of error, which means our measurement could be off by 32.2mV, or ten times the size of 1 LSB.

When I have 10 LSB of error I really should be rounding my results to the nearest 10 LSB, since the least significant bits of my measurement will be corrupted by this error. 10 LSB takes 3.32 bits to represent. This means that my lowest 3 bits are possibly incorrect and I can only be confident in the values represented by the 7 most significant bits of my result. This means that the effective number of bits (ENOB) for my system is only 7, even though my ADC is taking a 10-bit measurement. The lower 3 bits are affected by the measurement error and cannot be relied upon, so they should be discarded if I am trying to make an absolute voltage measurement accurately. We can always work out exactly how many bits of accuracy we are losing, or to how many bits we need to round, using the calculation: log(#LSB error)/log(2).

Note that this calculation will give us fractional numbers of bits. If we have 10 LSB of error, the error does not quite affect a full 4 bits (that happens only at 16 LSB), but we cannot say it removes only 3 bits, because that already happened at 8 LSB, so this is somewhere in between. In order to compare errors meaningfully we will work with fractions of bits in these cases, so 10 LSB of error reduces our accuracy by 3.32 bits. This is especially useful when errors are additive, because we can add up all the fractional LSB of errors to get the total error to the nearest bit. At this point I would like to encourage you to take your oscilloscope and try to measure how much noise you can detect on your lines.
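Since we will reuse this calculation throughout, here it is as a small sketch in plain C, just restating the formulas above:

#include <math.h>
#include <stdio.h>

/* Restates the formulas above: LSB size, an error expressed in LSB,
 * and the bits of accuracy lost to that error.                      */
int main(void)
{
    double vref = 3.3;
    int    bits = 10;
    double counts = pow(2, bits);            /* 1024                  */
    double lsb = vref / counts;              /* 3.22 mV per LSB       */
    double error_lsb = 0.01 * counts;        /* a 1% error = 10.24 LSB */
    double bits_lost = log(error_lsb) / log(2.0); /* about 3.4 bits   */

    printf("1 LSB = %.2f mV\n", lsb * 1000.0);
    printf("1%% error = %.2f LSB, costing %.2f bits (ENOB ~ %.1f)\n",
           error_lsb, bits_lost, bits - bits_lost);
    return 0;
}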
You will probably be surprised that most desk oscilloscopes can only measure signals down to 20mV, which means that 1 LSB on a 10-bit ADC with a 3V3 reference will be close to 10x smaller than the smallest signal your digital scope can measure! If you can see noise on the scope (which you probably can) then it is probably at least 20mV, or 10 LSB of error. It turns out that our intuition about how accurate an ADC should be, as well as how accurately our scope can measure, is seldom correct ...

Measurement Setup

I am using my trusty Saleae Logic Pro 8 today. It has a 12-bit ADC, measures +-10V on the analog channel, and is calibrated to be accurate to between 9 and 10 ENOB of absolute accuracy. This means that 1 LSB of error will be roughly 4.8mV, which for my 2V system with a 10-bit ADC is already the size of 2 LSB of measurement error. When I ground the Saleae input and take a measurement we can see how much noise to expect on the input during our measurements. As you will see later, we actually want to see 2-3 LSB of noise so that we can improve accuracy by digital filtering; if you do not have enough noise this is not possible, so this looks really good. Using the software to find the maximum variation for me, you can see that I have about 15.64mV of noise on my line. Since the range is +-10V this is only 15.6/20000 = 0.08% of error, but this is going to be, for my target 2V range, 15.6/2048*1024 = 8 LSB of error to start with on my measurement equipment!

For an experiment we are going to need an analog voltage source to measure using the ADC. It so happens that this device has a DAC, so why not just use that! You would think that this was a no-brainer, but it turns out, as always, that it is not quite as simple as that would seem! What I will do first is set the DAC and ADC to use the same reference (this has the added benefit that Vref inaccuracy will be cancelled out, nice!). We expect that if we set the DAC to give us 1.024V (50% of full range) and we then measure this using the 10-bit ADC, we would measure half of the ADC range, or 512, right? For the test I made a simple program that will just measure the ADC every 1 second and print the result to the UART. Well, here is the result of the measurement (to the right). Not what you expected?! Not only are the first two readings appallingly bad, but the average seems to be 717, which is a full 40% more than we expect! How is this possible? Well, this is how. Not only is the ADC inaccurate here, but the DAC even more so! The DAC is only 5 bits and it is specified to be accurate to 5 LSB. That is already a full 320mV of error, but that is still not nearly enough to explain why we are measuring 717/1024*2.048 = 1.434V instead of 1.024V... So what is really going on here? To see, I connected my trusty Saleae and changed the application to cycle the DAC through all 32 values, 1s per value, and make a plot for us to look at. On the Saleae we see this. It turns out that the DAC is such a weak source that anything you connect to its output (like even an ADC input's leakage, or simply an I/O pin with nothing connected to it!) will load down the DAC and skew the output! This has actually been the cause of consternation for many a soul (see e.g. this post on the Microchip forum). Wow, so that makes sense, but is there anything we can do about this? On this device unfortunately there is not much we can do.
There are devices with on-board op-amps you can use to buffer the DAC output, like the PIC16F170x family, but this device does not have op-amps so we are out of luck! I will blog all about DAC's and about the reasons for this shape on another occasion; this blog is about the ADC after all! So all I am going to do is adjust the DAC setting to give us about the voltage we need, by measuring the result using the Saleae, and call it a day. Turns out I needed to subtract 6 from the output setting to get close. We now see a measurement of 520, and this is what we see while taking measurements with the Saleae. 10.37mV of noise on just about 1V and we are in business!

Sources and Magnitude of Errors

When measurement errors are uncorrelated it means that they will all add up to form the total worst case error. For example, if I have 2 LSB of noise, and I also have 2 LSB of reference error, the reading can be 2 LSB removed from the correct value as a result of the reference, and an additional 2 LSB as a result of the noise, to give 4 LSB of total error. This means that these two types of errors are not correlated and the bits contributed by each to the total error are additive.

At this point I want to mention that customers often come to me demanding a 16-bit ADC because they feel that the 10-bit one they have is not adequate for their application. They can seldom explain to me why they need 31uV of accuracy, or what advanced layout techniques they are applying to keep the noise levels to even remotely this range, and most of the time the real problem turns out to be that their 10-bit ADC is really being used so badly that they are hardly getting 5 bits of accuracy from the converter. I also often see calculations which effectively discard the lower 4 bits of the ADC measurement, leaving you with only 8 bits of effective measurement, so if you do that, getting more bits in the ADC is obviously only going to buy you disappointment! That said, let's look at all of the most significant sources of error one by one in more detail. There are quite a few, so we will give them headings and numbers.

1. Voltage Reference

To get us started, let's look at the voltage reference and see how many LSB this contributes to our measurement error. If you are using a 1% reference, then please do not insist that you need a 16-bit or even a 12-bit ADC, because your reference alone is contributing errors into the 7th most significant bit and 8 bits is all you are going to get anyway! The datasheet for our evaluation board chip (PIC18F47K40) shows that the voltage reference will be accurate to +-4% when we set it to 2.048V like we did. People are always surprised when they realize how many LSB they are losing due to the voltage reference! 4% of 1024 = 41, which means that the reference alone can contribute up to 41 LSB of error to our system! Using an expensive off-chip reference also complicates things for us. Using that, we would now have to be very careful with the layout to not introduce noise at the reference pin, and also take care of any signals coupling into this pin. Even then the reference will likely be something like a TL431, which will only be accurate to 1%, which will be 10 LSB, reducing our 10-bit ADC to less than 8 ENOB. We must note that reference errors are not equally distributed. At the maximum scale 1% represents 10 LSB of error, but at the lower end of the scale 1% will represent only 1% of 1 LSB.
Since we are looking for the worst-case error, we have to work with 10 LSB due to the 1% error over the full ADC range. In your application you may be able to adjust the contribution of this error down to better represent the range you are expecting to measure. For example - at mid range, where our test signal is, the reference error will only contribute 5 LSB of error with a 1% reference, or about 20 LSB for our 4% internal reference. The reference error is something which we could calibrate out if we knew what the error was, and many manufacturers discard it, stating simply that you should calibrate it out. Sadly these references tend to drift over time, temperature and supply voltage, so you usually cannot just calibrate it in the factory, compensate for the error in software, and forget it.

To revisit our 16-bit ADC scenario: if I want to measure accurately to 31uV (16 bits on a 2V reference) that reference would have to be accurate to 31uV/2V = 0.0015%. Let's look on Digikey for a price on a voltage reference with the best specs we can find. The best candidate I can find is this one at $128.31 a piece, and even that gives me only 0.1% with up to 0.6ppm/C of drift. This means from 0 to 100C I will have 0.006% of temp drift (2 LSB) on top of the 0.1% tolerance (which is another 33 LSB). Now to be fair, if I am building a control system I am more interested in perturbations from a setpoint, and a 16-bit ADC may be valuable even if my reference is off, because I am not trying to take an absolute measurement; but still, maintaining noise levels below 30uV is more of a challenge than it sounds, especially if I am driving some power which adds noise to the equation. This is of course the difference between accuracy and resolution. Accuracy gives me the minimum absolute error, while resolution gives me the smallest relative unit of measure.

2. Noise

Noise is of course the problem we all expect. It can often be a pretty large contributor to your measurement errors, and digital circuits are known for producing lots of noise that will couple into your inputs, but as we will see, noise is not all bad and can be essential if you want to improve the results through digital post-processing. We have seen that every 2mV of noise will add 1 LSB to the error on our system, as we have a 2V reference and 1024 steps of measurement. As you have now seen, this 2mV is probably much smaller than we can measure with a typical oscilloscope, so we cannot be sure how much noise we really have if we simply look at it on our scope. For most systems the recommendation would be to place the microcontroller in its lowest power sleep mode and avoid toggling any output pins during the sampling of the ADC measurement, to get the measurement with the lowest noise level. A simple experiment will show how much noise we could be coupling into the measurement when an adjacent pin is being toggled. I updated our program from before to simply toggle the pin next to the ADC input constantly, and measured with the Saleae to see what the effect is. On the left is the signal zoomed out and on the right is one of the transitions zoomed in so you can get a better look. That glitch on the measurement line is 150mV, or 75 LSB of noise, due to an adjacent pin toggling - and the dev board I have does not even have long traces, which would have made this much worse! It seems like a good idea to filter all this noise using analog low-pass filters like filter capacitors, but this is not always wise.
We can make small amounts of noise work to our advantage, as long as it is white noise which is uncorrelated with our signal and other errors. When we do post-processing, like taking multiple samples and averaging the result, we can potentially increase the overall accuracy of our measurement. Using this technique it is possible to increase the ENOB (effective number of bits) of your measurements by simply taking more samples and averaging them. Without getting too deep into the math, if you oversample a signal by a factor of N you will improve the SNR by a factor of sqrt(N), which means oversampling 256 times and taking the average will result in an increase of 16x the SNR, which represents an additional 4 bits of resolution of the ADC. Of course this is where having uncorrelated white noise of at least +-1 LSB is important. If you have no noise on your signal you would likely just sample the same value 256 times and the average would not add any improvement to the resolution. If you had white noise added to the signal, however, you would sample a variety of values with the average lying somewhere in between the LSB you can measure, and the ratio of difference would represent the value of the signal more accurately. For a detailed discussion on this topic you can take a look at this application note by Silicon Labs: https://www.silabs.com/documents/public/application-notes/an118.pdf
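As an illustration of the averaging idea, here is a minimal sketch; ADC_Read() is a placeholder for whatever single-conversion routine your device provides:

#include <stdint.h>

extern uint16_t ADC_Read(void);  /* placeholder for the real driver */

/* Oversample-and-average sketch. Averaging 256 samples improves SNR
 * by sqrt(256) = 16x, roughly 4 extra bits, provided the noise is
 * white and at least +-1 LSB.                                       */
uint16_t ADC_ReadAveraged256(void)
{
    uint32_t sum = 0;

    for (uint16_t i = 0; i < 256; i++) {
        sum += ADC_Read();
    }
    return (uint16_t)(sum / 256);  /* or return sum >> 4 to keep the
                                      extra 4 bits of resolution      */
}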
3. Offset

The internal circuitry in the ADC will cause some offset error to be added into the conversion. This error moves all measurements either up or down by an equal amount. The offset is a critical parameter for an ADC and should be specified in the datasheet for your device. For the PIC18F47K40 the error due to offset is specified as 2 LSB. Of course if we knew what the offset was we could easily subtract it from the results, so many specifications will exclude the offset error and claim that you could easily "calibrate out" the offset. This may be possible, even easy to do, but if you do not write the code for it and do the actual math, you will have to include the offset error in your accuracy calculations, and measuring what the current offset is can be a real challenge in a real-world system which is not located on your laboratory bench. If you do decide to measure the offset on the factory floor and calibrate it out using software, you need to be careful to use an accurate reference, avoid noise and other sources of error, and make sure that the offset will remain constant over the operating range of voltage and temperature and will not drift over time. If any of these conditions do not hold, your calibration will be met with limited success. Offset is often hard to calibrate out, since many ADC's are not accurate close to the extremes (at Vref or 0V). If they were, you could take a measurement with the input on Vref+ and on Vref- and determine the offset, but we knew it was never going to be this easy! The offset will also be different from device to device, so it is not possible to calibrate this out with fixed values in your code; you will have to actively measure this on every device in the factory and adjust as the offset changes. Some manufacturers will actually calibrate out the offset on an ADC for you during their manufacturing process. If this is the case you will probably see a small offset error of +-1 LSB, which means that it is calibrated to be within this range. On our device the datasheet specifies a typical offset error of 0.5 LSB with a max error of 2 LSB, so this device is factory calibrated to remove the offset error, but even after this we should still expect up to 2 LSB of drift in the offset around the calibrated value.

4. Gain Error

Similar to the offset, the internal transfer function of the ADC is designed to be as close as possible to ideal, but there is always some error. Gain error will cause the slope of the transfer function to change. Depending on the offset, this can cause an error which is at its maximum either at the top or bottom end of the measurement scale, as shown in the figure below. Like the offset, it is also possible to calibrate out the gain error, as long as we have enough reference points to use for the calibration. If the transfer function is perfectly linear this would mean we only require 2 measurement points. For our device the datasheet spec is typically 0.2 LSB of gain error with a max error of 1.5 LSB. This means that we cannot gain much from attempting to calibrate out the gain on this one. For other manufacturers you can easily find gain and offset errors in the tens of LSB, which makes calibration and compensation for the gain and offset worth the effort. The PIC18F47K40 is not only compensated for drift with temperature but also individually calibrated in the factory, so any additional calibration measurements will be at most accurate to 1 LSB, and the device is already specified to typically have less than this error, so calibration will probably gain us nothing.

5. Missing Codes and DNL

We expect that every time the code increments by 1 LSB, the input voltage has increased by exactly 1 LSB in size. For an ADC, the DNL error is a measure of how close to this ideal we are in reality. It represents the largest single-step error that exists over the entire range of the ADC. If the DNL is stated at 0.5 LSB, this means that it can take anything from 0.5 LSB to 1.5 LSB of input voltage change to get the output code to increment by 1. When the DNL is more than 1 LSB, it means that we can move the input voltage by 2 LSB and only get a single count out of the converter. When this happens it is possible that the next code gets squeezed down to 0 LSB, which can cause the converter to skip that code entirely, as shown below. Most converters will specify that the result will monotonically increase as the voltage increases and that there will be no missing codes as you scan through the range, but you still have to be careful, because this is under ideal conditions, and when you add in the other errors it is possible that some codes get skipped. So when you are comparing the output of the converter, never check for a specific conversion value; always look for a value in a range around the limit you are checking.
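In code, that last warning amounts to something like this (a trivial sketch; THRESHOLD and MARGIN are made-up values for illustration):

#include <stdbool.h>
#include <stdint.h>

#define THRESHOLD 512u
#define MARGIN    4u   /* cover DNL, noise and other error sources */

/* Range-check an ADC result instead of testing for equality.       */
bool adc_at_threshold(uint16_t adc)
{
    /* never: if (adc == THRESHOLD) - that exact code may be missing */
    return (adc >= THRESHOLD - MARGIN) && (adc <= THRESHOLD + MARGIN);
}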
6. Integral Nonlinearity - INL

INL is another of the critical parameters for all ADC's and will be stated in your datasheet if the ADC is any good. For our example the INL is specified as 3.5 LSB. The term INL refers to the integral of the differential nonlinearity. In effect it represents the maximum deviation from the ideal transfer function of the ADC, as shown in the picture below. The yellow line represents the ideal transfer function while the blue line represents the actual transfer function. As you can see, the INL is defined as the size of the maximum error through the range of the ADC. Since the INL can occur at any location along the curve, it is not possible to calibrate it out. It is also uncorrelated with the other errors we have examined. We just have to live with this one!

7. Sampling Error

A SAR ADC contains a sampling capacitor which holds the voltage we are converting during the conversion cycle. We must take care when we take a sample that we allow enough time for this sampling capacitor to charge to the level of accuracy we want to see in our conversion. Effectively we end up with a circuit that has some series impedance through which the sampling capacitor is charged. The simplified circuit for the PIC18F47K40 looks as follows (from the datasheet). As you can see, the series impedance (Rs) together with the sampling switch and passgate impedance (RIC + Rss) forms a low-pass RC filter charging Chold. A detailed calculation of the sampling time required to be within 1 LSB of the desired sampling value is shown in the ADC section of the device datasheet. If we leave too little time for the sample to be acquired, this will directly result in a measurement error. In our case this means that if we have 10K of Rs and we wait 462us after the sampling mux turns to the input we are measuring, the capacitor will be charged to within 0.5 LSB of our target voltage. The ADC on the PIC18F47K40 has a built-in circuit that can keep the sampling switch closed for us for a number of Tadc periods. This can be set by adjusting the ADACQ register, or by using the provided API generated by MCC. That first inaccurate result we saw in the conversion was a direct result of the channel not being given enough time to charge the sampling cap, since the acquisition time was set to the default value of 0. Of course, since we are not switching channels, the capacitor is closer to the correct value when we take subsequent samples, so the error seems to go away over time! I have seen customers just throw away the first ADC sample as inaccurate, but if you do not understand why, you can easily get yourselves into a lot of trouble when you need to switch channels! We can re-do the measurement, this time using an acquisition time of 4 x Tadc = 6.8us. This is the result.

NOTE: There is another errata on this device: you have to wait at least 1 instruction cycle before reading the ADGO bit to see if the conversion is complete after setting the ADGO bit. At first I was just doing what the datasheet suggested: set ADGO and then wait while(ADGO); for the conversion to complete. Due to this errata, however, the ADGO bit will still read 0 the first time you read it, and you will think the conversion is done while it has not even started, resulting in an ADC reading of 0! After adding the required NOP() to the generated MCC code as follows, the incorrect first reading is gone:
adc_result_t ADCC_GetSingleConversion(adcc_channel_t channel, uint8_t acquisitionDelay)
{
    // Turn on the ADC module
    ADCON0bits.ADON = 1;

    // Select the A/D channel
    ADPCH = channel;

    // Set the acquisition delay
    ADACQ = acquisitionDelay;

    // Disable the continuous mode
    ADCON0bits.ADCONT = 0;

    // Start the conversion
    ADCON0bits.ADGO = 1;

    NOP(); // NOP workaround for ADGO silicon issue

    // Wait for the conversion to finish
    while (ADCON0bits.ADGO)
    {
    }

    // Conversion finished, return the result
    return (adc_result_t)(((adc_result_t)ADRESH << 8) + ADRESL);
}

Uncorrelated Errors

I will leave the full analysis up to the reader, but all of these errors are uncorrelated and thus additive, so for our case the worst case error will occur when all of these errors align: the offset is in the same direction as the gain error, as the noise, as the INL error, etc. Of course when we test on the bench it is unlikely that we will encounter a situation where all of these are 100% aligned, but if we have manufactured thousands of units in the field running for years it is definitely going to happen, and much more often than you would like, so we have no choice but to design for the worst-case error we are likely to see in the wild. For our example the different sources of error add up as follows:

- Voltage Reference = 4% [41 LSB]
- Noise [8 LSB]
- Offset [2.5 LSB]
- Gain [1.5 LSB]
- INL [3.5 LSB]

For a total of 56.5 LSB of potential absolute error in measurement. This reduces our effective number of bits by log(56.5)/log(2) = 5.8 bits, which means that our 10-bit ADC can have absolute errors running into the 6th bit, giving us only 4 ENOB (effective number of bits) when we are looking for absolute accuracy. We can improve this to 26.5 LSB by using a 1% off-chip reference, which will make the ENOB = 5 bits.

If we look at the measurement we get using the Saleae, we measure 0.99V on the line, which should result in 0.99V/2.048V * 1024 = 495, but our measurement is in fact 520, which is off by 25 LSB. So as we can see, our one-board sample does not hit the worst case error at the center of the sampling range here, but our error extended at least into the 5th bit of the result, as our 25 LSB error requires more than 4 bits to represent. Nevertheless, 25 LSB is quite a bit better than the worst-case value of 56.5 LSB of error which we calculated, so this particular sample is not doing too badly! I am going to get my hands on a hair dryer in the week and take some measurements at an elevated temperature, and then I will come back and update this for your reading pleasure 🙂

Vendor Comparison

I recently compared some ADC's from different vendors. I was actually looking more at the other features, but since I was busy with this I also noted down the specs. Not all of the datasheets were perfectly clear, so do reach out to me if I made a mistake somewhere, but this is how they matched up in terms of ADC performance. As far as I could find them, I used the worst case specifications and not the typical ones. Some manufacturers only specify typical results, so this comparison is probably not fair to those who provide better specifications with better information. Let me know in the comments how you feel about this. I will go over the numbers again and maybe come update all of these to typical values for a more fair comparison if someone asks me for this ...
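The budget above is simple enough to restate as code (just the arithmetic from the list, nothing device-specific):

#include <math.h>
#include <stdio.h>

/* Restates the worst-case error budget above: uncorrelated errors
 * add, and the bits lost follow log2 of the total LSB error.       */
int main(void)
{
    double vref_lsb   = 41.0; /* 4% internal reference on 10 bits */
    double noise_lsb  = 8.0;
    double offset_lsb = 2.5;
    double gain_lsb   = 1.5;
    double inl_lsb    = 3.5;

    double total = vref_lsb + noise_lsb + offset_lsb + gain_lsb + inl_lsb;
    double bits_lost = log(total) / log(2.0);

    printf("Total error: %.1f LSB -> %.1f bits lost -> %.1f ENOB\n",
           total, bits_lost, 10.0 - bits_lost);  /* 56.5 LSB, ~4 ENOB */
    return 0;
}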
Manufacturer | Xilinx  | Microchip  | Texas Instruments | Espressif | ST Micro  | Renesas
Device       | XC7Z010 | PIC32MZ EF | CC3220SF          | ESP32     | STM32L475 | R65N V2
INL          | 2       | 3          | 2.5               | 12        | 2.5       | 3
DNL          | 1       | 1          | 4                 | 7         | 1.5       | 2
Offset       | 8       | 2          | 6(1)              | 25(1)     | 2.5       | 3.5
Gain         | 0.5     | 8          | 82(1)             | ?(2)      | 4.5       | 3.5
Total Error (INL+Offset+Gain) | 10.5 | 13 | 90.5 | 37+ | 9.5 | 10

I noted that many of these manufacturers specify their ADC only at one temperature point (25C), so you probably have to dig a little deeper to ensure that the specs will not vary greatly over temperature.

(1) These figures were specified in the datasheet as an absolute voltage and I converted them to LSB for max range and best resolution of the ADC. Specifically, for the TI device the offset was specified as 2mV and gain error as 20mV on a 1.4V range, and for the ESP32 the offset is specified as 60mV but for a wider voltage range of 2.45V.
(2) For the ESP32 I was not able to determine the gain error clearly from the datasheet.

Final Notes

We can conclude a couple of very important points from this:

- If the datasheet claims a 12-bit ADC we should not expect 12 bits of accuracy. First we need to calculate what to expect from our entire system, and we should expect the reference to add the most to our error.
- All 12-bit converters are not equal, so when comparing devices do not just look at how many bits the converters provide; also compare their performance! The same system can yield between 5 and 10 bits of accuracy depending on the specs of the converter, so do not be fooled!
- Many of the vendors specified their ADC only at a very specific temperature and reference voltage at maximum. Take care not to be fooled by this - shall we call it "creative" - specmanship, and be sure to compare apples with apples when looking for absolute accuracy.

Source Code

For those who have this board or device, I attach the test code I used for download here: ADC_47K40.zip
21. I am not clear on what you are trying to achieve here. The reality is that even if you provide a secure wrapper, anyone could still call the insecure printf directly. If they wanted to be malicious in this case, they could just make a call with the address of your function incremented by a couple of instructions, to skip your checks? I think the way to make it "secure", in a way, is for you to simply process the format strings and limit what can be passed into these. If you do a secure function protected by TrustZone or something, you need to make a function which has the format string either fixed or severely limited. That said, all of this is not going to help you if they use a side-channel attack to just read all of memory ...
22. If you have purchased an "MPLAB® Xpress PIC18F47K40 Evaluation Board" from Microchip (part number DM182027) and you are running into difficulty because the board is behaving strangely, it is most likely caused by a silicon errata on this device! The errata can be downloaded here: http://ww1.microchip.com/downloads/en/DeviceDoc/PIC18F27-47K40-Silicon-Errata-and-Data-Sheet-Clarification-80000713E.pdf The relevant section of the errata is shown at the end. What is happening is that the compiler is using a TBLRD instruction somewhere, and this instruction is not behaving as expected due to a silicon bug in REV A2 of the PIC18F47K40, causing the read to fail and the program to malfunction. Typically this happens as part of the C initialization code generated by the XC8 compiler, and since the compiler is optimizing, changing the code may cause the problem to temporarily disappear, because you have few enough global variables that a table read is no longer the fastest way to initialize the memory segment for variables with static linkage. The XC8 compiler can avoid generating the sequence which causes the failure if you tell it in the linker settings to implement the workaround for this errata. This is done by adding +NVMREG to the setting as follows. Note that this is under the section "XC8 Linker" and the Option Category "Additional Options". This is the relevant section of the Errata.
23. I recently got my hands on a brand new PIC18F47K40 Xpress board (I ordered one after we ran into the errata bug here a couple of weeks ago). I wanted to start off with a simple "Hello World" app which would use the built-in CDC serial port, which is great for doing printf debugging and for interacting with the board in general, since it has no LED's and no display to let me know that anything is working, but immediately I got stuck. Combing the user's guide I could not find any mention of the CDC interface or which pins to configure to make it work, so I stared at the schematic and identified a handful of candidates which I could try as the correct pins. Eventually I figured it out: the TX pin to send data to the PC is RB6 and the RX pin which will receive data from the PC is RB7, so if you are setting up the UART using MCC it should look like this:

I created a very simple little terminal application to test the board with, and thought this may be something useful for others who start with this board, especially since the standard MCC UART code falls afoul of the dreaded errata we discussed here before, which caught me out again even on this simple little application. So remember to set the linker up to implement the workaround for the errata like so ->

The little program simply waits for a keypress (so you have time to set up your terminal application) and then (at 115200 BAUD) sends the welcome message "Hello World". After this the program will echo every character you type, but it will enclose it in a message so that you are sure you are not just fooled by the local echo in your terminal program! Here is the whole program; the serial port and other setup is all generated using MCC.

void main(void)
{
    // Initialize the device
    SYSTEM_Initialize();

    // Wait for a keypress before we start
    EUSART1_Read();

    // Say Hello
    printf("Hello World\r\n");

    while (1)
    {
        printf("Received: '%c' \r\n", EUSART1_Read());
    }
}

The full example project can be downloaded from here: ADC_47K40.zip

Note: I have set the project up to automatically copy the hex file onto the board, programming it, when I build the code. This is however set up for my Mac; if you are on a PC or Linux you will probably see an error when building, saying that it could not copy the file. If you want to set this up for your platform, the instructions are in the user's guide for the board. I include a screenshot of the page here for convenience.
24. I am trying to do a simple test to measure the output of the DAC using the ADC. The datasheet for the DAC states very clearly that if you have enabled the DAC output pin, this will override any pin output functions, including the digital output (TRIS), the weak pull-ups and the digital input threshold circuit (so ANSEL behaves like it is set to 1):

But MCC is generating a warning saying that my setup is incorrect. Since the output settings are overridden when the pin is a DAC output, this warning should not be created when the pin is a DAC output, which makes it behave as needed. Besides, there should not be any problem measuring an output pin using the ADC when it is an output anyway? I do not understand why the warning is claiming that it "requires" this pin to be set as input. Perhaps it would be better if the warning said "this pin is being driven by the device; if you are trying to measure an external voltage this will interfere with your readings; you can avoid this by disabling the pin output drivers by making the pin an input" or something to that extent?

Required boilerplate:

Component           | Version
Device              | PIC18F74K40
MCC                 | 3.75
MPLAB-X             | 5.10
Foundation Services | 0.1.31
PIC18 Lib           | 1.76.0