Leaderboard


Popular Content

Showing content with the highest reputation since 12/23/2018 in all areas

  1. 3 points
    This is a "must read" list for Embedded Software Engineers. If we missed one please let us know in the comments! Please make a contribution to help us improve this list by leaving a comment. We are particularly interested in books we missed when compiling the list. If you leave a comment and we agree it will be added promptly. Here is "The List" in short form conveniently made up as Amazon.com links and remember if you follow any of these links before shopping on Amazon they will make a contribution to help us support this site! Scroll down for a more detailed list with cover pictures. The C Programming Language, 2nd Edition Design Patterns: Elements of Reusable Object-Oriented Software Code Complete: A Practical Handbook of Software Construction, Second Edition Making Embedded Systems: Design Patterns for Great Software Software Estimation: Demystifying the Black Art (Developer Best Practices) The Art of Computer Programming, Volumes 1-4A Boxed Set The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition Refactoring: Improving the Design of Existing Code (2nd Edition) UML Distilled: A Brief Guide to the Standard Object Modeling Language (3rd Edition) Clean Code: A Handbook of Agile Software Craftsmanship Software Architecture in Practice: Software Architect Practice_c3 (SEI Series in Software Engineering) 97 Things Every Programmer Should Know: Collective Wisdom from the Experts Programming 32-bit Microcontrollers in C: Exploring the PIC32 (Embedded Technology) The Pragmatic Programmer: From Journeyman to Master Compilers: Principles, Techniques, and Tools (2nd Edition) Applied Cryptography: Protocols, Algorithms and Source Code in C Structure and Interpretation of Computer Programs - 2nd Edition Introduction to Algorithms, 3rd Edition Honorable Mentions. Books not quite worthy of "The List" but still important recommended reading. The C99 Standard, really, you should have read this already if you are going to program anything embedded! (PDF link to the draft) Zen and the Art of Motorcycle Maintenance: An Inquiry into Values Guide to the Software Engineering Body of Knowledge (SWEBOK(R)): Version 3.0 A Guide to the Project Management Body of Knowledge (PMBOK® Guide)–Sixth Edition Happy Reading! .tg td { font-family: Arial, sans-serif; font-size: 14px; padding: 10px 5px; overflow: hidden; word-break: normal; } .tg .tg-yqpd_img { width: 200px; height: 200px; } .tg-yqpd_img img { border: 1px solid #ddd; border-radius: 4px; padding: 5px; } .tg .tg-yqpd { border-color: #ffffff; text-align: left; vertical-align: top } 1. The C Programming Language, 2nd Edition This is our No1. must read book if you are going to be doing embedded programming. Written by Kerninghan and Ritchie, the inventors of the C language. Learn how the C language was designed to work and why. It is packed with numerous excercises to ensure you understand every concept. You really should keep this on your desk as a reference if you ever get stuck. 2. Design Patterns: Elements of Reusable Object-Oriented Software Design patterns is how we communicate as Software Engineers about architectural details. If a building architect said the building should be "Tuscan Style" this would mean a wealth of things to people on the project about shape, size, colors, building materials etc. Design patterns form a similar language for Software Engineers and is a crucial tool in your arsenal. This is the original book known as the Gang of Four book or GOF for short. 
A must-read before you venture further into other design patterns books.

3. Code Complete: A Practical Handbook of Software Construction, Second Edition

This is by far the best all-round book about software development. It covers all aspects of Software Engineering to some degree; it is very thorough and a must-read just to make sure you know what is out there.

4. Making Embedded Systems: Design Patterns for Great Software

This is by far the best introductory book we have seen, but it has an equal amount of gems in there for experienced campaigners, especially in the later sections on optimization (Doing More with Less) and math, which covers floating point issues and precision. We love the section "How NOT to use interrupts", and the one on bootloaders, for example.

5. Software Estimation: Demystifying the Black Art (Developer Best Practices)

This is just a brilliant book on software project management. What makes it great is how it covers 100% of the foundational theory on estimation and planning and also covers the personal side. We love the scripts and dialogs coaching you on how to present your estimates to management in such a way that they will not force unreasonable deadlines upon your team. McConnell explains that the "Science of Estimation" is mathematically intensive, uses all kinds of really complex formulae and can get estimates in the order of 5%. He then explains that this book is NOT about the science; it is more about what he calls the "Art of Estimation", which will not get you to 5%, but it will be good enough for most projects to be managed.

6. The Art of Computer Programming, Volumes 1-4A Boxed Set

Computer programming is based on a lot of science. Without a solid knowledge of data structures and algorithms, programming a microcontroller system is like trying to do woodwork with your bare hands, scratching away with your nails. You really have to cover these fundamentals, and Knuth is the all-time master at teaching them.

7. The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition

This is one of those books which is quoted so often that you will quickly give away the fact that you are the only one in the room who has not read it. Don't be that guy! And remember, adding more people to a project when it is late will make it even later, and putting 9 women on the job cannot create a baby in 1 month! But seriously, the best part of this book is probably the chapter "The Surgical Team", which explains beautifully the core principles SCRUM and small Agile teams are built on, written decades before the rest of us realized that Fred Brooks was right all along!

8. Refactoring: Improving the Design of Existing Code (2nd Edition)

Martin Fowler is probably the greatest mind in Computer Science today and he does not get the credit he deserves for it. Read this book and you will find out first hand just how much we can learn from this guy. I am not kidding when I say that his Event Sourcing architectural pattern is THE ONLY way to go for even moderately complex embedded systems. This book covers the fundamentals you need to be Agile: get your code out there quickly so you can test your requirements and get customer feedback, and then apply this book to refactor your existing code in such a way that your architecture improves and you stay on the blue line (Design Stamina Hypothesis - Google it!).

9. UML Distilled: A Brief Guide to the Standard Object Modeling Language (3rd Edition)

Another Martin Fowler book.
Especially in Embedded Systems we see time and again that not enough design is happening. The old saying that you have to solve the problem first and then write the code is not taught enough! This book will give you all the tools you need to create the sequence diagrams, deployment diagrams and static structure diagrams you need to communicate and "reason about the system" (yes, that is indeed me quoting from "Software Architecture in Practice").

10. Clean Code: A Handbook of Agile Software Craftsmanship

Uncle Bob is just a legend when it comes to the tactics of writing software. We are big fans of the SOLID principles, and almost everything he covers in this book can make you a better coder. Also check out his website and training videos; most of them will teach you something new and they are all entertaining as hell.

11. Software Architecture in Practice: Software Architect Practice_c3 (SEI Series in Software Engineering)

Those who were lucky enough to study computer science will already have this book, as every Computer Science course worth its salt uses this as the textbook for the Architecture course. We really love how this book enumerates and covers the pros and cons of the majority of high-level architectural patterns we use in computer systems today.

12. 97 Things Every Programmer Should Know: Collective Wisdom from the Experts

I discovered Kevlin Henney only recently but I love the ideas he is teaching: things like reminding us that software is written for people to read and understand, and the concept of signal-to-noise ratio in code. He explains that spaces are indeed superior to tabs and why. This book is a great collection of almost 100 tactics you can apply on a daily basis to improve your code. If you want to stand on the shoulders of giants it is critical that you heed their advice, and this is a great collection of expert advice.

13. Programming 32-bit Microcontrollers in C: Exploring the PIC32 (Embedded Technology)

When it comes to the PIC32 there is no better way to discover how it works and how to program it than this book. The fact that he actually works for Microchip gives Lucio amazing depth of insight into how this device was designed to be used and what its strengths and weaknesses are. In fact, if you want a book to learn about PIC microcontrollers we recommend you search for Lucio Di Jasio on Amazon and pick the one for your platform!

14. The Pragmatic Programmer: From Journeyman to Master

This is another one of those classic books that keeps popping up on every "best programming books" list. This book covers loads of practical advice on how to make your code better in general. Ward Cunningham reviewed it and concluded that "The Pragmatic Programmer illustrates the best practices and major pitfalls of many different aspects of software development. Whether you're a new coder, an experienced programmer...". We agree!

15. Compilers: Principles, Techniques, and Tools (2nd Edition)

OK, we know that if you want to learn how to write a compiler today there are better texts than this, but this is still the book every compiler designer recommends at some point. What I love about this book is that it explains pretty early on what the compilation process looks like, which leads to understanding the reasons why compilers do things that can seem silly but are actually essential to produce working code consistently.
It is always going to help you be a better programmer if you have at least a rudimentary understanding of how compilers and linkers work, and this is a great place to start.

16. Applied Cryptography: Protocols, Algorithms and Source Code in C

Security is getting more and more important in our connected world. Don't even try to do any security yourself unless you have read this book cover to cover. I am serious - don't! This is really the best place to learn the fundamentals of information security. Schneier is not only a world-renowned expert on the topic, but he has a talent for explaining an extremely complex topic in a truly accessible way.

17. Structure and Interpretation of Computer Programs - 2nd Edition

Brought to you by the "MIT Electrical Engineering and Computer Science" team, this is a fantastic book about the science of programming. If you are a "tinkerer" who is happily surprised when your code runs and leaves comments like "no idea why this works" in your code, this is probably not going to be for you. If you want to write robust code in a systematic way using science and engineering, this is a must-read. If you taught yourself how to program and only know imperative programming, then this book will go a long way towards filling in your blind spots, introducing you to a wealth of knowledge you never knew was out there.

18. Introduction to Algorithms, 3rd Edition

An excellent textbook on algorithms, covering subjects from the basics like big "O" notation to advanced topics like van Emde Boas trees and multithreaded algorithms. This book is used as the textbook for the algorithms classes at universities like MIT, CMU, Stanford and Yale.

Please help us improve this list by posting feedback in the comments below. Let us know if new editions are published, links are dead, etc.
  2. 3 points
I am doing some work with combinatorial optimizers. It is amazing what happens when you turn over one more rock and see what scurries out. There is a whole class of programming called declarative programming, and I have worked with Haskell enough to be slightly familiar with the concepts. I just learned about FlatZinc and an easier environment called MiniZinc, which are completely declarative and can be used to solve optimization problems by describing the constraints a valid solution fits inside. So here is a quick example of a program to find the smallest-area rectangle where the area is 10 times the circumference.

var 1..1000: side1;
var 1..1000: side2;
var float: area;
var float: circumference;

constraint area = side1 * side2;
constraint circumference = 2 * side1 + 2 * side2;
constraint area = 10*circumference;

solve minimize area;

output ["side1 = \(side1)\nside2 = \(side2)\narea = \(area)\ncircumference = \(circumference)\n"];

And here is the output showing every iteration.

side1 = 420
side2 = 21
area = 8820.0
circumference = 882.0
----------
side1 = 220
side2 = 22
area = 4840.0
circumference = 484.0
----------
side1 = 120
side2 = 24
area = 2880.0
circumference = 288.0
----------
side1 = 100
side2 = 25
area = 2500.0
circumference = 250.0
----------
side1 = 70
side2 = 28
area = 1960.0
circumference = 196.0
----------
side1 = 60
side2 = 30
area = 1800.0
circumference = 180.0
----------
side1 = 45
side2 = 36
area = 1620.0
circumference = 162.0
----------
side1 = 40
side2 = 40
area = 1600.0
circumference = 160.0
----------
==========
Finished in 82msec

Obviously this is a trivial example, but it turns out there is quite a bit of research and there are libraries in this field, for example Google OR-Tools, which could be incorporated in your C code. If you need to optimize something and you can describe what the answer looks like (the constraints), then these tools are pretty good. Of course these problems are NP-complete in general, so solutions can take some time. Good Luck.
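For contrast (and to make clear what the declarative style saves you), here is a minimal sketch of roughly the same problem written imperatively in plain C. This is a naive exhaustive search, not how a real CP solver such as MiniZinc/FlatZinc or OR-Tools works internally; the point is simply that here you have to invent the search yourself, whereas the model above only states the constraints.

    #include <stdio.h>

    /* Naive exhaustive search for the smallest-area rectangle whose area
       is exactly 10 times its circumference, with integer sides 1..1000. */
    int main(void)
    {
        int best_s1 = 0, best_s2 = 0;
        double best_area = -1.0;

        for (int s1 = 1; s1 <= 1000; s1++)
        {
            for (int s2 = 1; s2 <= 1000; s2++)
            {
                double area = (double)s1 * s2;
                double circumference = 2.0 * s1 + 2.0 * s2;

                if (area == 10.0 * circumference &&
                    (best_area < 0.0 || area < best_area))
                {
                    best_area = area;
                    best_s1 = s1;
                    best_s2 = s2;
                }
            }
        }

        printf("side1 = %d\nside2 = %d\narea = %.1f\ncircumference = %.1f\n",
               best_s1, best_s2, best_area, best_area / 10.0);
        return 0;
    }

It finds the same 40 x 40 answer, but every ounce of search strategy is hand-written, which is exactly the work the constraint solver does for you.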
  3. 3 points
When comparing CPUs and architectures it is also a good idea to compare the frameworks and learn how the framework will affect your system. In this article I will be comparing a number of popular Arduino compatible systems to see how different "flavors" of Arduino stack up in the pin toggling test. When I started this effort, I thought it would be a straightforward demonstration of CPU efficiency, clock speed and compiler performance on the one side against the Arduino framework implementation on the other. As is often the case, if you poke deeply into even the most trivial of systems you will always find something to learn.

As I look around my board stash I see that I have the following Arduino compatible development kits:

Arduino Nano Every (ATMega 4809 @ 20MHz AVR Mega)
Mini Nano V3.0 (ATMega 328P @ 16MHz AVR)
RobotDyn SAMD21 M0-Mini (ATSAMD21G18A @ 48MHz Cortex M0+)
ESP-12E NodeMCU (ESP8266 @ 80MHz Tensilica)
Teensy 3.2 (MK20DX256VLH7 @ 96MHz Cortex M4)
ESP32-WROOM-32 (ESP32 @ 240MHz Tensilica)

And each of these kits has an available Arduino framework. Say what you will about the Arduino framework, there are some serious advantages to using it and a few surprises. For the purpose of this testing I will be running one program on every board. I will use vanilla "Arduino" code and make zero changes for each CPU.

The Arduino framework is very useful for normalizing the API to the hardware in a very consistent and portable manner. This is mostly true at the low levels like timers, PWM and digital I/O, but it is very true as you move to higher layers like the String library or WiFi. Strangely, there are no promises of performance. For instance, every Arduino program has a setup() function where you put your initialization and a loop() function that is called very often. With this in mind it is easy to imagine the following implementation:

extern void setup(void);
extern void loop(void);

void main(void)
{
    setup();
    while(1)
    {
        loop();
    }
}

And in fact when you dig into the AVR framework you find the following code in main.cpp:

int main(void)
{
    init();

    initVariant();

#if defined(USBCON)
    USBDevice.attach();
#endif

    setup();

    for (;;) {
        loop();
        if (serialEventRun) serialEventRun();
    }

    return 0;
}

There are a few "surprises" that really should not be surprises. First, the Arduino environment needs to be initialized (init()), then the HW variant (initVariant()), then we might be using a USB device so USB gets started (USBDevice.attach()) and finally, the user setup() function is called. Then we start our infinite loop. Between calls to the loop function the code maintains the serial connection, which could be USB. I suppose that other frameworks could implement this environment a little bit differently and there could be significant consequences to these choices.

The Test

For this test I am simply going to initialize 1 pin and then set it high and low. Here is the code.

void setup() {
    pinMode(2,OUTPUT);
}

void loop() {
    digitalWrite(2,HIGH);
    digitalWrite(2,LOW);
}

I am expecting this to make a short high pulse and a slightly longer low pulse. The longer low pulse is to account for the extra overhead of looping back. This is not likely to be as fast as the pin toggles Orunmila did in the previous article but I do expect it to be about half as fast. Here are the results. The 2 red lines at the bottom are the best case optimized raw speed from Orunmila's comparison.
That is a pretty interesting chart, and if we simply compare the data from the ATMEGA 4809 both with ASM and Arduino code, you see a 6x difference in performance. Let us look at the details and we will summarize at the end.

Nano 328P

So here is the first victim, the venerable AVR AT328P running at 16MHz. The high pulse is 3.186uS while the low pulse is 3.544uS, making a pulse frequency of 148.2kHz. Clearly the high and low pulses are nearly the same, so the extra check to handle the serial ports is not very expensive, but the digitalWrite abstraction is much more expensive than I was anticipating.

Nano Every

The Nano Every uses the much newer ATMega 4809 at 20MHz. The 4809 is a different variant of the AVR CPU with some additional optimizations like set and clear registers for the ports. This should be much faster. The high pulse is 1.192uS and the low pulse is 1.504uS. Again the pulses are almost the same size, so the additional overhead outside of the loop function must be fairly small. Perhaps it is the same serial port test. Interestingly, one of the limiting factors of popular Arduino 3D printer controller projects such as GRBL is the pin toggle rate for driving the stepper motor pulses. A 4809 based controller could be 2x faster for the same stepper code.

Sam D21 Mini M0

Now we are stepping up to an ARM Cortex M0+ at 48MHz. I actually expect this to be nearly 2x the performance of the 4809, simply because the instructions required to set pins high and low should be essentially the same. Wow! I was definitely NOT expecting the timing to get worse than the 4809. The high pulse width is 1.478uS and the low pulse width is 1.916uS, making the frequency 294.6kHz. Obviously toggling pins is not a great measurement of CPU performance, but if you need fast pin toggling in the Arduino world, perhaps the SAMD21 is not your best choice.

Teensy 3.2

This is an NXP Cortex M4 CPU at 96 MHz. This CPU is double the clock speed of the D21 and it is an M4, which has lots of great features, though those features may not help toggle pins quickly. Interesting. Clearly this device is very fast, as shown by the short high period of only 0.352uS. But this framework must be doing quite a lot of work behind the scenes to justify the 2.274uS of loop delay. Looking a little more closely I see a number of board options for this hardware. First, I see that I can disable the USB. Surely the USB is serviced between calls to the loop function. I also see a number of compiler optimization options. If I turn off the USB and select the "fastest" optimizations, what is the result?

Teensy 3.2, No USB and Fastest optimizations

Making these two changes and re-running the same C code produces this result: that is much better. It is interesting to see that the compiler change makes it about 3x faster for this test (measured on the high pulse) and the lack of USB saves about 1uS in the loop rate. This is not a definitive test of the optimizations, and the code probably grew a bit, but it is a stark reminder that optimization choices can make a big difference.

ESP8266

The ESP8266 is a 32-bit Tensilica CPU. This is still a load/store architecture so its performance will largely match ARM, though undoubtedly there are cases where it will be a bit different. The 8266 runs at 80MHz so I do expect the performance to be similar to the Teensy 3.2. The wildcard is that the 8266 framework is intended to support WiFi, so it is running FreeRTOS and the Arduino loop is just one thread in the system.
I have no idea what that will do to our pin toggle, so it is time to measure. Interesting. It is actually quite slow, and clearly there is quite a bit of system housekeeping happening in the main loop. The high pulse is only 0.948uS, so that is very similar to the Nano Every at 1/4th the clock speed. The low pulse is simply slow. This does seem to be a good device for IoT, but not for pin toggling.

ESP32

The ESP32 is a dual core, very fast machine, but it does run the code out of a cache. This is because the code is stored in a serial memory. Of course our test is quite short, so perhaps we do not need to fear the cache miss. Like the ESP8266, the Arduino framework is built upon a FreeRTOS task. But this has a second CPU and lots more clock speed, so let's look at the results: interesting, the toggle rate is about 2x the Teensy while the clock speed is about 3x. I do like how the pulses are nearly symmetrical. A quick peek at the source code for the framework shows the Arduino code running as a thread, but the thread updates the watchdog timer and the serial drivers on each pass through the loop.

Conclusions

It is very educational to make measurements instead of assumptions when evaluating an MCU for your next project. A specific CPU may have fantastic specifications and even demonstrations, but it is critical to include the complete development system and code framework in your evaluation. It is a big surprise to find that the 16MHz AVR328P can actually toggle a pin faster than the ESP8266 when used in a basic Arduino project. The summary graph at the top of the article is duplicated here:

In this graph, the Pin Toggling Speed is actually only 1/(the high period). This was done on purpose so only the pin toggle efficiency is being compared. In the test program, the low period is where the loop() function ends and other housekeeping work can take place. If we want to compare the CPU/code efficiency, we should really normalize the pin toggling frequency to a common clock speed. We can always compensate for inefficiency with more clock speed. This graph is produced by dividing the frequency by the clock speed, and now we can compare the relative efficiencies. That Cortex M4 and its framework in the Teensy 3.2 is quite impressive now. Clearly the ESP32 is pretty good, but it is using its clock speed for the win. The Mega 4809 has a reasonable framework, just not enough clock speed. All that aside, the ASM versions (or even a faster framework) could seriously improve all of these numbers. The poor ESP8266 is pretty dismal.

So what is happening in the digitalWrite() function that is making this performance so slow? Put another way, what am I getting in return for the low performance? There are really 3 reasons for the performance:

1. Portability. Each device has work to do to adapt to the pin interface, so the price of portability is runtime efficiency.
2. Framework Support. There are many functions in the framework that could be affected by writing to the pins, so the digitalWrite function must account for those other features (for example, turning off PWM).
3. Application Ignorance. The framework (and this function) cannot know how the system is constructed, so they must plan for the worst.

Let us look at digitalWrite for the AVR:

void digitalWrite(uint8_t pin, uint8_t val)
{
    uint8_t timer = digitalPinToTimer(pin);
    uint8_t bit = digitalPinToBitMask(pin);
    uint8_t port = digitalPinToPort(pin);
    volatile uint8_t *out;

    if (port == NOT_A_PIN) return;

    // If the pin that support PWM output, we need to turn it off
    // before doing a digital write.
    if (timer != NOT_ON_TIMER) turnOffPWM(timer);

    out = portOutputRegister(port);

    uint8_t oldSREG = SREG;
    cli();

    if (val == LOW) {
        *out &= ~bit;
    } else {
        *out |= bit;
    }

    SREG = oldSREG;
}

Note that the first thing is a few lookup functions to determine the timer, port and bit described by the pin number. These lookups can be quite fast, but they do cost a few cycles. Next we ensure we have a valid pin and turn off any PWM that may be active on that pin. This is just safe programming and framework support. Next we figure out the output register for the update, turn off the interrupts (saving the interrupt state), set or clear the pin, and restore interrupts. If we knew we were not using PWM (like this application) we could omit the turnOffPWM call. If we knew all of our pins were valid we could remove the NOT_A_PIN test. Unfortunately all of these optimizations require knowledge of the application which the framework cannot have. Clearly we need new tools to describe embedded applications.

This has been a fun bit of testing. I look forward to your comments and suggestions for future toe-to-toe challenges. Good Luck and go make some measurements.

PS: I realize that this pin toggling example is simplistic at best. There are some fine Arduino libraries and peripherals that could easily toggle pins much faster than the results shown here. However, this is a simple apples-to-apples test of identical code in "identical" frameworks on different CPUs, so the comparisons are valid and useful. That said, if you have any suggestions feel free to enlighten us in the comments.
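To see what trading away that portability buys, here is a minimal sketch (not taken from the test code above) of toggling the same pin with direct port access on the classic AVR boards. It assumes the usual Nano/Uno mapping of Arduino pin 2 to PD2; check the pin mapping for your own board.

void setup() {
    DDRD |= _BV(DDD2);            // make PD2 (Arduino pin 2 on the 328P Nano/Uno) an output
}

void loop() {
    PORTD |= _BV(PORTD2);         // pin high - typically compiles to a single SBI instruction
    PORTD &= ~_BV(PORTD2);        // pin low  - typically compiles to a single CBI instruction
}

There are no pin-number lookups, no PWM bookkeeping and no interrupt masking here, which is exactly the framework support and application ignorance discussed above; that is the trade-off you are making.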
  4. 3 points
https://amzn.to/2Vibb9c

After posting the negative review on the other book here I realized that it is not much help unless you provide an alternative! A couple of years ago I stumbled upon this book by Elecia White. Ever since, I have recommended it as a must-read to every new member of my team; even those with years of experience always reported back that they learned something valuable from reading it.

I stumbled upon this book looking for something on design patterns in embedded systems, and in those terms it was not what I was looking for (there is barely a mention of design patterns in the book), but I was pleasantly surprised by what I did find. I like where the book starts off, explaining the value of design and architecture and why this is where you should start with your project. She moves on to basic I/O and timers, which I think go together pretty well, and, importantly, she covers the main use cases and patterns quite nicely and points out the most common pitfalls people fall into. The next chapter, "Making the Flow of Activity", covers the main paradigms for embedded systems like the superloop and event-driven approaches, and also covers table-driven state machines and interrupts; I particularly liked the section called "How NOT to use interrupts". The next chapter, "Doing More with Less", was a pretty good introduction to the methods you have to learn to tell how much RAM and FLASH you are using, and she covers important concepts like not using malloc. The chapter on math is sure to teach even experienced engineers a couple of new tricks, and the last chapter on power consumption is practical and well done.

Overall I felt like this was a great book for beginners and a pretty good recap even for experienced engineers, who will no doubt also learn a couple of new tricks after going through this book.
  5. 2 points
Structures in the C Programming Language

Structures in C are one of the most misunderstood concepts. We see a lot of questions about the use of structs, often simply about the syntax and portability. I want to explore both of these and look at some best-practice use of structures in this post, as well as some lesser known facts. Covering it all will be pretty long, so I will start off with the basics, the syntax and some examples, then I will move on to some more advanced stuff. If you are an expert who came here for some more advanced material, please jump ahead using the links supplied. Throughout I will refer to the C99 ANSI C standard often, which can be downloaded from the link in the references. If you are not using a C99 compiler some things like designated initializers may not be available. I will try to point out where something is not available in older compilers that only support C89 (also known as C90). C99 is supported in XC8 from v2.0 onwards.

Advanced topics handled lower down:

Scope
Designated Initializers
Declaring Volatile and Const
Bit-Fields
Padding and Packing of structs and Alignment
Deep and Shallow copy of structures
Comparing Structs

Basics

A structure is a compound type in C which is known as an "aggregate type". Structures allow us to use sets of variables together like a single aggregate object. This allows us to pass groups of variables into functions, assign groups of variables to a destination location as a single statement, and so forth. Structures are also very useful when serializing or de-serializing data over communication ports. If you are receiving a complex packet of data it is often possible to define a structure specifying the layout of the variables, e.g. the IP protocol header structure, which allows more natural access to the members of the structure. Lastly, structures can be used to create register maps, where a structure is aligned with CPU registers in such a way that you can access the registers through the corresponding structure members.

The C language has only 2 aggregate types, namely structures and arrays. A union is notably not considered an aggregate type as it can only have one member object (overlapping objects are not counted separately). [Section "6.5.2 Types" of C99]

Syntax

The basic syntax for defining a structure follows this pattern.

struct [structure tag] {
    member definition;
    member definition;
    ...
} [one or more structure variables];

As indicated by the square brackets, both the structure tag (or name) and the structure variables are optional. This means that I can define a structure without giving it a name. You can also just define the layout of a structure without allocating any space for it at the same time. What is important to note here is that if you are going to use a structure type throughout your code, the structure should be defined in a header file and the structure definition should then NOT include any variable definitions. If you do include the structure variable definition part in your header file, this will result in a different variable with an identical name being created every time the header file is included! This kind of mistake is often masked by the fact that the compiler will co-locate these variables, but this kind of behavior can cause really hard to find bugs in your code, so never do that! Declare the layout of your structures in a header file and then create the instances of your variables in the C file they belong to.
Use extern declarations if you want a variable to be accessible from multiple C files, as usual. Let's look at some examples.

Example 1 - Declare an anonymous structure (no tag name) containing 2 integers, and create one instance of it. This means storage space is allocated in RAM for one instance of this structure.

struct {
    int i;
    int j;
} myVariableName;

This structure type does not have a name, so it is an anonymous struct, but we can access the variables via the variable name which is supplied. The structure type may not have a name, but the variable does. When you declare a struct like this it is not possible to declare a function which will accept this type of structure by name.

Example 2 - Declare a type of structure which we will use later in our code. Do not allocate any space for it.

struct myStruct {
    int i;
    int j;
};

If we declare a structure like this we can create instances, or define variables, of the struct type at a later stage as follows. (According to the standard, "A declaration specifies the interpretation and attributes of a set of identifiers. A definition of an identifier is a declaration for that identifier that causes storage to be reserved for that object" - 6.7)

struct myStruct myVariable1;
struct myStruct myVariable2;

Example 3 - Declare a type of structure and define a type for this struct.

typedef struct myStruct {
    int i;
    int j;
} myStruct_t;   // Not to be confused with a variable declaration
                // typedef changes the syntax here - myStruct_t is part of the typedef, NOT the struct definition!

// This is of course equivalent to
struct myStruct {
    int i;
    int j;
};   // Now if you placed a name here it would allocate a variable
typedef struct myStruct myStruct_t;

The distinction here is a constant source of confusion for developers, and this is one of many reasons why using typedef with structs is NOT ADVISED. I have added in the references a link to some archived conversations which appeared on Usenet back in 2002. In these messages Linus Torvalds explains much better than I can why it is generally a very bad idea to use typedef with every struct you declare, as has become the norm for so many programmers today. Don't be like them! In short, typedef is used to achieve type abstraction in C; this means that the owner of a library can at a later time change the underlying type without telling users about it and everything will still work the same way. But if you are not using the typedef exactly for this purpose you end up abstracting, or hiding, something very important about the type. If you create a structure it is almost always better for the consumer to know that they are dealing with a structure, and as such it is not safe to do comparisons like == on the struct, and it is also not safe to copy the struct using = due to deep copy problems (described later on). By letting the users of your structs know explicitly that they are using structs, you will avoid a lot of really hard to track down bugs in the future. Listen to the experts!

This all means that the BEST PRACTICE way to use structs is as follows.

Example 4 - How to declare a structure, instantiate a variable of this type and pass it into a function. This is the BEST PRACTICE way.

struct point {   // Declare a cartesian point data type
    int x;
    int y;
};

void pointProcessor(struct point p)   // Declare a function which takes struct point as a parameter by value
{
    int temp = p.x;
    ...
    // and the rest
}

void main(void)
{
    // local variables
    struct point myPoint = {3,2};   // Allocate a point variable and initialize it at declaration.

    pointProcessor(myPoint);
}

As you can see, we declare the struct and it is clear that we are defining a new structure which represents a point. Because we are using the structure correctly it is not necessary to call this point_struct or point_t, because when we use the structure later it will be accompanied by the struct keyword, which will make its nature perfectly clear every time it is used. When we use the struct as a parameter to a function we explicitly state that this is a struct being passed; this acts as a caution to the developers who see it that deep/shallow copies may be a problem here and need to be considered when modifying the struct or copying it. We also explicitly state this when a variable is declared, because when we allocate storage is the best time to consider structure members that are arrays or pointers to characters or something similar, which we will discuss later under deep/shallow copies and also comparisons and assignments.

Note that this example passes the structure to the function "by value", which means that a copy of the entire structure is made on the parameter stack and this is passed into the function, so changing the parameter inside of the function will not affect the variable you are passing in; you will be changing only the temporary copy.

Example 5 - HOW NOT TO DO IT! You will see lots of examples on the web done this way; it is not best practice, please do not do it this way!

// This is an example of how NOT to do it
// This does the same as example 4 above, but doing it this way abstracts the type in a bad way
// This is what Linus Torvalds warns us against!
typedef struct point_tag {   // Declare a cartesian point data type
    int x;
    int y;
} point_t;

void pointProcessor(point_t p)
{
    int temp = p.x;
    ...
    // and the rest
}

void main(void)
{
    // local variables
    point_t myPoint = {3,2};   // Allocate a point variable and initialize it at declaration.

    pointProcessor(myPoint);
}

Of course now the tag name of the struct has no purpose, as the only thing we ever use it for is to declare yet another type with another name; this is a source of endless confusion to new C programmers, as you can imagine! The mistake here is that the typedef is used to hide the nature of the variable.

Initializers

As you saw above, it is possible to assign initial values to the members of a struct at the time of definition of your variable. There are some interesting rules related to initializer lists which are worth pointing out. The standard requires that initializers be applied in the order that they are supplied, and that all members for which no initializer is supplied shall be initialized to 0. This applies to all aggregate types. This is all covered in the standard, section 6.7.8. I will show a couple of examples to clear up common misconceptions here. Descriptions are all in the comments.
struct point {
    int x;
    int y;
};

void function(void)
{
    int myArray1[5];           // This array has indeterminate (random) values because there is no initializer
    int myArray2[5] = { 0 };   // Has all its members initialized to 0
    int myArray3[5] = { 5 };   // Has its first element initialized to 5, all other elements to 0
    int myArray4[5] = { };     // Has all its members initialized to 0 (empty braces are a GNU extension, not standard C99)

    struct point p1;           // x and y both have indeterminate (random) values
    struct point p2 = {1, 2};  // x = 1 and y = 2
    struct point p3 = { 1 };   // x = 1 and y = 0

    // Code follows here
}

These rules about initializers are important when you decide in which order to declare the members of your structures. We saw a great example of how user interfaces can be simplified by placing members to be initialized to 0 at the end of the list of structure members when we looked at the examples of how to use RTCOUNTER in another blog post. More details on initializers, such as designated initializers and variable length arrays, which were introduced in C99, are discussed in the advanced section below.

Assignment

Structures can be assigned to a target variable just the same as any other variable. The result is the same as if you used the assignment operator on each member of the structure individually. In fact, one of the enhancements of the "Enhanced Midrange" core in all PIC16F1xxx devices is the capability to do shallow copies of structures faster through specialized instructions!

struct point {   // Declare a cartesian point data type
    int x;
    int y;
};

void main(void)
{
    struct point p1 = {4,2};   // p1 initialized through an initializer-list
    struct point p2 = p1;      // p2 is initialized through assignment
    // At this point p2.x is equal to p1.x and so is p2.y equal to p1.y

    struct point p3;
    p3 = p2;                   // And now all three points have the same value
}

Be careful though: if your structure contains external references such as pointers you can get into trouble, as explained later under Deep and Shallow copy of structures.

Basic Limitations

Before we move on to advanced topics, as you may have suspected there are some limitations to how much of each thing you can have in C. The C standard calls these limits Translation Limits. They are a requirement of the C standard specifying what the minimum capabilities of a compiler have to be to call itself compliant with the standard. This ensures that your code will compile on all compliant compilers as long as you do not exceed these limits. The Translation Limits applicable to structures are:

External identifiers must use at most 31 significant characters. This means structure names or members of structures should not exceed 31 unique characters.
At most 1023 members in a struct or union.
At most 63 levels of nested structure or union definitions in a single struct-declaration-list.

Advanced Topics

Scope

Structure, union, and enumeration tags have scope that begins just after the appearance of the tag in a type specifier that declares the tag. When you use typedefs, however, the type name only has scope after the type declaration is complete. This makes it tricky to define a structure which refers to itself when you use typedefs to define the type, something which is important to do if you want to construct something like a linked list. I regularly see people tripping themselves up with this because they are using the BAD way of using typedefs. Just one more reason not to do that! Here is an example.
// Perfectly fine declaration which compiles, as myList has scope inside the curly braces
struct myList {
    struct myList* next;
};

// This DOES NOT COMPILE!
// The reason is that myList_t only has scope after the closing curly brace where the type name is supplied.
typedef struct myList {
    myList_t* next;
} myList_t;

As you can see above, we can easily refer a member of the structure to a pointer to the structure itself when we stay away from typedefs, but how do you handle the more complex case of two separate structures referring to each other? In order to solve that one we have to make use of incomplete struct types. Below is an example of how this looks in practice.

struct a;   // Incomplete declaration of a
struct b;   // Incomplete declaration of b

struct a {  // Completing the declaration of a with a member pointing to the still incomplete b
    struct b * myB;
};

struct b {  // Completing the declaration of b with a member pointing to the now complete a
    struct a * myA;
};

This is an interesting example from the standard on how scope is resolved.

Designated Initializers (introduced in C99)

Example 4 above used initializer-lists to initialize the members of our structure, but we were only able to omit members at the end, which limited us quite severely. If we could omit any member from the list, or rather include members by designation, we could supply the initializers we need and let the rest be set safely to 0. This was introduced in C99.

This addition had a bigger impact on unions, however. There is a rule for unions which states that initializer-lists shall be applied solely to the first member of the union. It is easy to see why this was necessary: since the structs which comprise a union do not have to have the same number of members, it would be impossible to apply a list of constants to an arbitrary member of the union. In many cases this means that designated initializers are the only way that unions can be initialized consistently.

Examples with structs:

struct multi {
    int x;
    int y;
    int a;
    int b;
};

struct multi myVar = {.a = 5, .b = 3};   // Initialize the struct to { 0, 0, 5, 3 }

Examples with a union:

struct point {
    int x;
    int y;
};

struct circle {
    struct point center;
    int radius;
};

struct line {
    struct point start;
    struct point end;
};

union shape {
    struct circle mCircle;
    struct line mLine;
};

void main(void)
{
    volatile union shape shape1 = {.mLine = {{1,2}, {3,4}}};   // Initialize the union using the line member
    volatile union shape shape2 = {.mCircle = {{1,2}, 10}};    // Initialize the union using the circle member
    ...
}

This type of initialization of a union using the second member of the union was not possible before C99, which also means that if you are trying to port C99 code to a C89 compiler this will require you to write initializer functions which are functionally different, and your port may end up not working as expected.

Initializers with designations can be combined with compound literals. Structure objects created using compound literals can be passed to functions without depending on member order. Here is an example.

struct point {
    int x;
    int y;
};

// Passing 2 anonymous structs into a function without declaring local variables
drawline( (struct point){.x=1, .y=1}, (struct point){.y=3, .x=4});

Volatile and Const Structure Declarations

When declaring structures it is often necessary for us to make the structure volatile; this is especially important if you are going to overlay the structure onto registers (a register map) of the microprocessor.
It is important to understand what happens to the members of the structure in terms of volatility depending on how we declare it. This is best explained using the examples from the C99 standard.

struct s {           // Struct declaration
    int i;
    const int ci;
};

// Definitions
struct s s;
const struct s cs;
volatile struct s vs;

// The various members have the types:
s.i     // int
s.ci    // const int
cs.i    // const int
cs.ci   // const int
vs.i    // volatile int
vs.ci   // volatile const int

Bit Fields

It is possible to include in the declaration of a structure how many bits each member should occupy. This is known as "bit fields". It can be tricky to write portable code using bit-fields if you are not aware of their limitations. Firstly, the standard states that "A bit-field shall have a type that is a qualified or unqualified version of _Bool, signed int, unsigned int, or some other implementation-defined type." Further to this it also states that "As specified in 6.7.2 above, if the actual type specifier used is int or a typedef-name defined as int, then it is implementation-defined whether the bit-field is signed or unsigned." This means effectively that unless you use _Bool or unsigned int your structure is not guaranteed to be portable to other compilers or platforms. The recommended way to declare portable and robust bitfields is as follows.

struct bitFields {
    unsigned enable : 1;
    unsigned count : 3;
    unsigned mode : 4;
};

When you use any of the members in an expression they will be promoted to a full-sized unsigned int during the expression evaluation. When assigning back to the members, values will be truncated to the allocated size. It is possible to use anonymous bitfields to pad out your structure, so you do not need to use dummy names in a struct if you build a register map with some unimplemented bits. That would look like this:

struct bitFields {
    unsigned int enable : 1;
    unsigned : 3;
    unsigned int mode : 4;
};

This declares a structure which is at least 8 bits in size and has 3 padding bits between the members "enable" and "mode". The caveat here is that the standard does not specify how the bits have to be packed into the structure, and different systems do in fact pack bits in different orders (e.g. some may pack from the LSB while others will pack from the MSB first). This means that you should not rely on specific bits in your struct being in specific positions. All you can rely on is that in 2 structs of the same type the bits will be packed in corresponding locations. When you are dealing with communication systems and sending structures containing bitfields over the wire, you may get a nasty surprise if bits are in a different order on the receiver side. And this also brings us to the next possible inconsistency: packing. This means that for all the syntactic sugar offered by bitfields, it is still more portable to use shifting and masking. By doing so you can select exactly where each bit will be packed, and on most compilers this will result in the same amount of code as using bitfields.

Padding, Packing and Alignment

This is going to be less applicable on a PIC16, but if you write portable code or work with larger processors this becomes very important. Typically padding will happen when you declare a structure that has members which are smaller than the fastest addressable unit of the processor. The standard allows the compiler to place padding, or unused space, in between your structure members to give you the fastest access in exchange for using more RAM.
This is called "Alignment". On embedded applications RAM is usually in short supply so this is an important consideration. You will see e.g. on a 32-bit processor that the size of structures will increment in multiples of 4. The following example shows the definition of some structures and their sizes on a 32-bit processor (my i7 in this case running macOS). And yes it is a 64 bit machine but I am compiling for 32-bit here. // This struct will likely result in sizeof(iAmPadded) == 12 struct iAmPadded { char c; int i; char c2; } // This struct results in sizeof(iAmPadded) == 8 (on GCC on my i7 Mac) or it could be 12 depending on the compiler used. struct iAmPadded { char c; char c2; int i; } Many compilers/linkers will have settings with regards to "Packing" which can either be set globally. Packing will instruct the compiler to avoid padding in between the members of a structure if possible and can save a lot of memory. It is also critical to understand packing and padding if you are making register overlays or constructing packets to be sent over communication ports. If you are using GCC packing is going to look like this: // This struct on gcc on a 32-bit machine has sizeof(struct iAmPadded) == 6 struct __attribute__((__packed__)) iAmPadded { char c; int i; char c2; } // OR this has the same effect for GCC #pragma pack(1) struct __attribute__((__packed__)) iAmPadded { char c; int i; char c2; } If you are writing code on e.g. an AVR which uses GCC and you want to use the same library on your PIC32 or your Cortex-M0 32-bitter then you can instruct the compiler to pack your structures like this and save loads of RAM. Note that taking the address of structure members may result in problems on architectures which are not byte-addressible such as a SPARC. Also it is not allowed to take the address of a bitfield inside of a structure. One last note on the use of the sizeof operator. When applied to an operand that has structure or union type, the result is the total number of bytes in such an object, including internal and trailing padding. Deep and Shallow copy Another one of those areas where we see countless bugs. Making structures with standard integer and float types does not suffer from this problem, but when you start using pointers in your structures this can turn into a problem real fast. Generally it is perfectly fine to create copies of structures by passing them into functions or using the assignement operator "=". Example struct point { int a; int b; }; void function(void) { struct point point1 = {1,2}; struct point point2; point2 = point1; // This will copy all the members of point1 into point2 } Similarly when we call a function and pass in a struct a copy of the structure will be made into the parameter stack in the same way. When the structure however contains a pointer we must be careful because the process will copy the address stored in the pointer but not the data which the pointer is pointing to. When this happens you end up with 2 structures containing pointers pointing to the same data, which can cause some very strange behavior and hard to track down bugs. Such a copy, where only the pointers are copied is called a "shallow copy" of the structure. The alternative is to allocate memory for members being pointed to by the structure and create what is called a "deep copy" of the structure which is the safe way to do it. We probably see this with strings more often than with any type of pointer e.g. 
struct person {
    char* firstName;
    char* lastName;
};

// Function to read a person's name from the serial port
void getPerson(struct person* p);

void f(void)
{
    struct person myClient = {"John", "Doe"};   // The structure now points to the constant strings

    // Read the person data
    getPerson(&myClient);
}

// The intention of this function is to read 2 strings and assign the names of the struct person
void getPerson(struct person* p)
{
    char first[32];
    char last[32];

    Uart1_Read(first, 32);
    Uart1_Read(last, 32);

    p->firstName = first;
    p->lastName = last;
}
// The problem with this code is that it is easy for it to look like it works.

The problem with this code is that it will very likely pass most tests you throw at it, but it is tragically broken. The 2 buffers, first and last, are allocated on the stack, and when the function returns the memory is freed but still contains the data you received. Until another function is called AND that function allocates auto variables on the stack, the memory will remain intact. This means that at some later stage the structure will become invalid and you will not be able to understand how; if you call the function twice you will later find that both variables you passed in contain the same names. Always double check and be mindful of where the pointers are pointing and what the lifetime of the allocated memory is. Be particularly careful with memory on the stack, which is always short-lived.

For a deep copy you would have to allocate new memory for the members of the structure that are pointers and copy the data from the source structure to the destination structure manually. Be particularly careful when structures are passed into a function by value, as this makes a copy of the structure which points to the same data, so in this case if you re-assign the pointers you are updating the copy and not the source structure! For this reason it is best to always pass structures by reference (the function should take a pointer to a structure) and not by value. Besides, if data is worth placing in a structure it is probably going to be bigger than a single pointer, and passing the structure by reference would probably be much more efficient!

Comparing Structs

Although it is possible to assign structs using "=", it is NOT possible to compare structs using "==". The most common solution people go for is to use memcmp with sizeof(struct) to try and do the comparison. This is however not a safe way to compare structures and can lead to really hard to track down bugs! The problem is that structures can have padding as described above, and when structures are copied or initialized there is no obligation on the compiler to set the values of the locations that are just padding, so it is not safe to use memcmp to compare structures. Even if you use packing, the structure may still have trailing padding after the data to meet alignment requirements. The only time using memcmp is going to be safe is if you used memset or calloc to clear out all of the memory yourself, but always be mindful of this caveat. The safe alternative is to compare structures member by member (a small sketch of this follows after the conclusion).

Conclusion

Structs are an important part of the C language and a powerful feature, but it is important that you ensure you fully understand all the intricacies involved. There is, as always, a lot of bad advice and bad code out there in the wild wild west known as the internet, so be careful when you find code in the wild, and just don't use typedef on your structs!
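As promised above, here is a minimal sketch of the memberwise comparison approach for the person structure used earlier. Unlike memcmp, it ignores any padding bytes and follows the pointers to compare the string contents they refer to; the function name is just an illustration.

#include <stdbool.h>
#include <string.h>

struct person {
    char* firstName;
    char* lastName;
};

// Compare two struct person objects member by member.
bool person_equal(const struct person* a, const struct person* b)
{
    return (strcmp(a->firstName, b->firstName) == 0) &&
           (strcmp(a->lastName,  b->lastName)  == 0);
}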
References

As always, the Wikipedia page is a good resource
Link to a PDF of the committee draft which was later approved as the C99 standard
Linus Torvalds explains why you should not use typedef everywhere for structs
Good write-up on packing of structures
Bit Fields discussed on Hackaday
  6. 2 points
I am working on a project for my camping trailer. The trailer has a 200W solar panel charging a 100Ah LiFe battery. This crazy amount of energy is used to run lights, a refrigerator and, more importantly, a laptop and telescope while I am in the field.

Here is the trailer stored in the yard. The grey metal on the front is a set of drawer slides holding the 200W 24V panel. The 24V works well for the MPPTs because I get charging voltage earlier in the morning. The 200W works well because I don't want to move the panels to track the sun while I am camping. Now I want a status panel for the entire trailer in my home office so I can see how prepared I am for camping at a moment's notice.

When I bought the trailer I specified a custom electrical system. This system utilizes a lot of Victron Energy equipment because it works well and it has a serial port with a published specification. They even show off open source projects using their equipment. Here is the electrical box. It did get a bit tight. Each of the blue units is a device with serial data for monitoring. This box lives in the black equipment box on the trailer tongue. The serial port is called VE.Direct and it is a simple 1 second transmission of useful information at 19200 baud. Here is an example of a transmission recorded from the trailer.

PID   0xA053
FW    146
SER#  HQ1734RTXXT
V     13500
I     -180
VPV   39200
PPV   0
CS    5
MPPT  1
OR    0x00000000
ERR   0
LOAD  ON
IL    200
H19   2436
H20   21
H21   106
H22   27
H23   132
HSDS  117
Checksum  $

I decided that step 1 in my project is to collect the data from the 3 devices in the box and report it to a second human interface/radio link in the back of the trailer. Here is my plan:

This indicates that I need an MCU that has 5 serial ports and is easily programmed for my hobby. I have been doing quite a bit of Arduino work in PlatformIO and that combination is pretty good, so how about using one of the many SAMD21 based Arduino kits? The SAMD21 has 6 SERCOMs that can be used for this purpose.

So the first step... How easy is it to use 5 of the SERCOMs to do what I want? Not too bad, but there are a few gotchas. The SAMD21 Arduino support has a file called variant.cpp that defines a big array of structures to cover the capabilities of each pin. This structure includes a "default configuration" setting such as SERCOM, Analog, Digital, Timer, etc. When you open the serial port (Serial.begin(19200)) the serial library puts the pins into the "default" behavior and not the SERCOM behavior. Therefore, if you want to use multiple SERCOMs you must do a few steps.

1) Ensure your SERCOM is initialized with the correct I/O pins.

Uart ve1(&sercom2, sc2_rx, sc2_tx, SERCOM_RX_PAD_3, UART_TX_PAD_2);
Uart ve2(&sercom3, sc3_rx, sc3_tx, SERCOM_RX_PAD_1, UART_TX_PAD_0);
Uart ve3(&sercom1, sc1_rx, sc1_tx, SERCOM_RX_PAD_3, UART_TX_PAD_2);
Uart ve4(&sercom4, sc4_rx, sc4_tx, SERCOM_RX_PAD_1, UART_TX_PAD_0);

The tricky bit here is mapping the Arduino pin number to the SAMD21 port/pin numbers so you can determine which port mux settings are applicable. This is helpfully documented in the variant file, but it is tedious, especially since I built a custom PCB and had to keep track of every abstraction layer to ensure I hooked it up correctly.

2) Start the UARTs.

ve1.begin(19200);
ve2.begin(19200);
ve3.begin(19200);
ve4.begin(19200);

This step was VERY simple and straightforward.

3) Put the new pins in the correct mode.
pinPeripheral(sc1_tx, PIO_SERCOM);
pinPeripheral(sc1_rx, PIO_SERCOM);
pinPeripheral(sc2_tx, PIO_SERCOM);
pinPeripheral(sc2_rx, PIO_SERCOM);
pinPeripheral(sc3_tx, PIO_SERCOM);
pinPeripheral(sc3_rx, PIO_SERCOM);
pinPeripheral(sc4_tx, PIO_SERCOM_ALT);
pinPeripheral(sc4_rx, PIO_SERCOM_ALT);

The begin() function put each of these pins in its DEFAULT mode according to the variant table, but most of these pins did not have PIO_SERCOM as their default mode. You must call pinPeripheral() after you call begin(), OR change the variant table. Naturally, my initial code did not, and things did not work. Since I have been working on this project off and on for some time, I finally found the bug in my VE.Direct class! The constructor was issuing the begin(), which does make sense, but it breaks the UART configuration that was already up and running. Now my serial ports are all running, and I am playing recorded messages into the system so I can debug inside where it is not 40°C. On to the next problem. Good Luck
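Here is a rough plain-C sketch of the next piece, parsing the VE.Direct text frame shown above into numbers I can report. The field names match the captured frame; everything else (function and struct names, buffer sizes) is invented for illustration, and the binary Checksum field is simply ignored here. In the Arduino loop this would be fed one byte at a time from ve1.read() and friends.

#include <stdlib.h>
#include <string.h>

typedef struct {
    long battery_mV;   /* "V"   : battery voltage in mV */
    long battery_mA;   /* "I"   : battery current in mA */
    long panel_mV;     /* "VPV" : panel voltage in mV   */
    long panel_W;      /* "PPV" : panel power in W      */
} VeDirectData;

/* Feed one received character; returns 1 each time a complete field line was parsed. */
int vedirect_feed(VeDirectData *d, char c)
{
    static char line[40];
    static size_t len = 0;

    if (c != '\n') {                        /* still collecting this line         */
        if (c != '\r' && len < sizeof line - 1)
            line[len++] = c;
        return 0;
    }
    line[len] = '\0';
    len = 0;

    char *tab = strchr(line, '\t');         /* VE.Direct lines are "LABEL\tVALUE" */
    if (tab == NULL)
        return 0;
    *tab = '\0';
    long value = strtol(tab + 1, NULL, 10);

    if      (strcmp(line, "V")   == 0) d->battery_mV = value;
    else if (strcmp(line, "I")   == 0) d->battery_mA = value;
    else if (strcmp(line, "VPV") == 0) d->panel_mV   = value;
    else if (strcmp(line, "PPV") == 0) d->panel_W    = value;
    return 1;
}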
  7. 2 points
I just received a care package from my father with a pile of old "junk". In there was a Speak and Spell from around 1978; I don't know when mine was bought. I quickly checked the battery compartment (4 C-cells), fearing I would find a pile of corrosion. I did, but it was the rusty sort. The battery contacts were rusted. I opened the unit, removed the contacts and dropped them in vinegar to dissolve the rust. The contacts completely fell apart. A quick Amazon order for Keystone 209s and I was back in business. The new clips seem to be slightly thicker or perhaps stiffer, as the batteries are more difficult to insert. BUT it works! While I was waiting for the 209s to arrive, I considered adding an 18650 battery and a USB battery charger. The old TMS5100 series electronics runs from -15V, so there is an inverting boost regulator to convert the 6V battery supply down to the -15V. I tested and this works OK from 5V, which is the standard output from the variety of battery manager/chargers you can find. But in the end I did not want a second button to activate the USB battery and then the normal ON button to activate the device. In any case, my 5 year old son loves it, though he did ask "can we install more games".
  8. 2 points
Comparing raw pin toggling speed: AVR ATmega4808 vs PIC 16F15376

Toggling a pin is such a basic thing. After all, we all start with that Blinky program when we bring up a new board. It is actually a pretty effective way to compare raw processing speed between any two microcontrollers. In this, our first Toe-to-toe showdown, we will be comparing how fast these cores can toggle a pin using just a while loop and a classic XOR toggle.

First, let's take a look at the 2 boards we used to compare these cores. These were selected solely because I had them both lying on my desk at the time. Since we are not doing anything more than toggling a pin, we just needed an 8-bit AVR core and an 8-bit PIC16F1 core on any device to compare. I do like these two development boards though, so here are the details if you want to repeat this experiment.

In the blue corner, we have the AVR, represented by the ATmega4808, sporting an ATmega core (AVRxt in the instruction manual) clocking at a maximum of 20MHz. We used the AVR-IOT WG Development Board, part number AC164160. This board can be obtained for $29 here: https://www.microchip.com/Developmenttools/ProductDetails/AC164160 Compiler: XC8 v2.05 (Free)

In the red corner, we have the PIC, represented by the 16F15376, sporting a PIC16F1 Enhanced Midrange core clocking at a maximum of 32MHz. We used the MPLAB® Xpress PIC16F15376 Evaluation Board, part number DM164143. This board can be obtained for $12 here: https://www.microchip.com/developmenttools/ProductDetails/DM164143 Compiler: XC8 v2.05 (Free)

Results
This is what we measured. All the details around the methodology we used and an analysis of the code follow below, and attached you will find all the source code we used if you want to try this at home. The numbers in the graph are pin toggling frequency in kHz after being normalized to a 1MHz CPU clock speed.

How we did it (and some details about the cores)
Doing objective comparisons between 2 very different cores is always hard. We wanted to make sure that we do an objective comparison between the cores which you can use to make informed decisions on your project. In order to do this, we had to deal with the fact that the maximum clock speed of these devices is not the same and also that the fundamental architecture of these two cores is very different.

In principle, the AVR is a load-store architecture machine with a 1-stage pipeline. This basically means that all ALU operations have to be performed between CPU registers, and the RAM is used to load operands from and store results to. The PIC, on the other hand, uses a register-memory architecture, which means in short that some ALU operations can be performed on RAM locations directly and that the machine has a much smaller set of registers. On the PIC all instructions are 1 word in length, which is 14 bits wide, while the data bus is 8 bits in size and all results will be a maximum of 8 bits in size. On the AVR, instructions can be 16-bit or 32-bit wide, which results in different execution times depending on the instruction. Both processors have a 1-stage pipeline, which means that the next instruction is fetched while the current one is being executed. This means branching causes an incorrect fetch and results in a penalty of one instruction cycle. One major difference is that the AVR, due to its load-store architecture, is capable of completing an instruction in as little as one clock cycle. When instructions need to use the data bus they can take up to 5 clock cycles to execute.
Since the PIC has to transfer data over the bus, it takes multiple cycles to execute an instruction. In keeping with the RISC paradigm of a highly regular instruction pipeline flow, all instructions on the PIC take 4 clock cycles to execute. All of this just makes it tricky and technical to compare performance between these processors. What we decided to do instead is take typical tasks we need the CPU to perform, which occur regularly in real programs, and simply measure how fast each CPU can perform these tasks. This should allow you to work backwards from what your application will be doing during maximum throughput pressure on the CPU and figure out which CPU will perform the best for your specific problem.

Round 1: Basic test
For the first test, we used virtually the same code on both processors. Since both of these are supported by MCC it was really easy to get going:
We created a blank project for the target CPU
Fired up MCC
Adjusted the clock speed to the maximum possible
Clicked in the pin manager to make a single pin on PORTC an output
Hit generate code

After this all we added was the following simple while loop:

PIC:
while (1)
{
    LATC ^= 0xFF;
}

AVR:
while (1)
{
    PORTC.OUT ^= 0xFF;
}

The resulting code produced by the free compilers (XC8 v2.05 in both cases) was as follows. Interestingly enough, both loops came out at the same length (6 program words in total) including the loop jump. This is especially interesting as it will show how long the execution of a same-length loop takes on each of these processors. You will notice that without optimization there is some room for improvement, but since this is how people will evaluate the cores at first glance we wanted to go with this.

PIC:
Address   Hex    Instruction
07B3      30FF   MOVLW 0xFF
07B4      00F0   MOVWF __pcstackCOMMON
07B5      0870   MOVF __pcstackCOMMON, W
07B6      0140   MOVLB 0x0
07B7      069A   XORWF LATC, F
07B8      2FB3   GOTO 0x7B3

AVR:
Address   Hex    Instruction
017D      9180   LDS R24, 0x00
017E      0444
017F      9580   COM R24
0180      9380   STS 0x00, R24
0181      0444
0182      CFFA   RJMP 0x17D

We used a Saleae logic analyzer to capture the signal and measure the timing on both devices. Since the Saleae is thresholding the digital signal and the rise and fall times are not always identical, you will notice a little skew in the measurements. We did run everything 512x slower to confirm that this was entirely measurement error, so it is correct to round all times to multiples of the CPU clock in all cases here.

Analysis
For the PIC, the clock speed was 32MHz. We know that the PIC takes 4 clock cycles to execute one instruction, which gives us an expected instruction rate of one instruction every 125ns. Rounding for measurement errors, we see that the PIC has equal low and high times of 875ns. That is 7 instruction cycles for each loop iteration. To verify if this makes sense we can look at the ASM. We see 6 instructions, the last of which is a GOTO, which we know will take 2 instruction cycles to execute. Using that fact we can verify that the loop repeats every 7 instruction cycles as expected (7 x 125ns = 875ns).

For the AVR, the clock speed was 20MHz.
We know that the AVR takes 1 clock cycle per instruction, which gives us an expected instruction rate of one instruction every 50ns. Rounding for measurement errors, we see that the AVR has equal low and high times of 400ns. That is 8 instruction cycles for each loop iteration. To verify if this makes sense we again look at the ASM. We see 4 instructions, the last of which is an RJMP, which we know will take 2 instruction cycles to execute. We also see one LDS, which takes 3 cycles because it is accessing SRAM, one STS instruction which takes 2 cycles, and a complement instruction which takes 1 more. Using those facts we can verify that the loop should repeat every 8 instruction cycles as expected (8 x 50ns = 400ns).

Comparison
Since the 2 processors are not running at the same clock speed we need to do some math to get a fair comparison. We think 2 particular approaches would be reasonable:
Compare the raw fastest speed the CPU can do; this gives a fair benchmark where CPUs with higher clock speeds get an advantage.
Normalize the results to a common clock speed; this gives us a fair comparison of capability at the same clock speed.
In the numbers below we used both methods for comparison.

The numbers

                    AVR                          PIC                                                 Notes
Clock Speed         20MHz                        32MHz
Loop Speed          400ns                        875ns
Maximum Speed       2.5MHz                       1.142MHz                                            Loop speed as a toggle frequency
Normalized Speed    125kHz                       35.7kHz                                             Loop frequency normalized to a 1MHz CPU clock
ASM Instructions    4                            6
Loop Code Size      12 bytes (4 instructions)    12 bytes / 6 words / 10.5 bytes (6 instructions)    Due to the nuances here we compared this 3 ways
Total Code Size     786 bytes                    101 words / 176.75 bytes

Round 2: Expert Optimized test
For the second round, we tried to hand-optimize the code to squeeze out the best possible performance from each processor. After all, we do not want to just compare how well the compilers are optimizing; we want to see what is the absolute best the raw CPUs can achieve. You will notice that although optimization doubled our performance, it made little difference to the relative performance between the two processors.

For the PIC we wrote to LATC to ensure we are in the right bank and pre-set the W register, which means the loop reduces to just an XORWF and a GOTO. For the AVR we changed the code to use the toggle register instead of doing an XOR of the OUT register for the port. The optimized code looked as follows.

PIC:
LATC = 0xFF;
asm ("MOVLW 0xFF");
while (1) {
    asm ("XORWF LATC, F");
}

AVR:
asm ("LDI R30,0x40");
asm ("LDI R31,0x04");
asm ("SER R24");
while (1) {
    asm ("STD Z+7,R24");
}

The resulting ASM code after these changes now looked as follows. Note we did not include the instructions outside of the loop here as we are really just looking at the loop execution.
PIC:
Address   Hex    Instruction
07C1      069A   XORWF LATC, F
07C2      00F0   GOTO 0x7C1

AVR:
Address   Hex    Instruction
0180      8387   STD Z+7,R24
0181      CFFE   RJMP 0x180

Here are the actual measurements:

Analysis
For the PIC we do not see how we could improve on this: the loop has to end in a GOTO, which takes 2 cycles, and 1 instruction is the least amount of work we could possibly do in the loop, so we are pretty confident that this is the best we can do. When measuring we see 3 instruction cycles, which we think is the limit here.
Note: N9WXU did suggest that we could fill all the memory with XOR instructions and let it loop around forever and in doing so save the GOTO, but we would still have to set W to FF every second instruction to have consistent timing, so this would still be 2 instructions per "loop", although it would use all the FLASH and execute in 250ns. Novel as this idea was, since it means you can do nothing else we dismissed it as not representative.
For the AVR we think we are also at the limits here. The toggle register lets us toggle the pin in 1 clock cycle, which cannot be beaten, and the RJMP unavoidably adds 2 more. We measure 3 cycles for this.

                    AVR         PIC                             Notes
Clock Speed         20MHz       32MHz
Loop Speed          150ns       375ns
Maximum Speed       6.667MHz    2.667MHz                        Loop speed as a toggle frequency
Normalized Speed    333.3kHz    83.3kHz                         Loop frequency normalized to a 1MHz CPU clock
ASM Instructions    2           2
Loop Code Size      4 bytes     4 bytes / 2 words / 3.5 bytes

At this point, we can do a raw comparison of absolute toggle frequency performance after the hand optimization. Comparing this way gives the PIC the advantage of running at 32MHz while the AVR is limited to 20MHz. Interestingly the PIC gains a little as expected, but the overall picture does not change much.

The source code can be downloaded here:
PIC MPLAB-X Project file MicroforumToggleTestPic16f1.zip
AVR MPLAB-X Project file MicroforumToggleTest4808.zip

What next?
For our next installment, we have a number of options. We could add more cores/processors to this test of ours, or we could take a different task and cycle through the candidates on that. We could also vary the tools by using different compilers and see how they stack up against each other and across the architectures. Since our benchmarks will all be based on real-world tasks it should not matter HOW the CPU is performing the task or HOW we created the code; the comparison will simply be how well the job gets done. Please do post any ideas or requests in the comments and we will see if we can either improve this one or oblige with another Toe-to-toe comparison.

Updates:
This post was updated to use the 1 cycle STD instruction instead of the 2 cycle STS instruction for the hand-optimized AVR version in round 2
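For anyone who wants to repeat the arithmetic in the tables above, here is a tiny helper (not part of the original benchmark projects) that reproduces the maximum and normalized toggle frequencies from the loop length and clock speed:

#include <stdio.h>

/* One toggle per loop pass: toggle frequency is simply 1 / (loop time). */
static double toggle_khz(double cpu_hz, double clocks_per_instr, double loop_cycles)
{
    double instr_time = clocks_per_instr / cpu_hz;   /* seconds per instruction cycle */
    return 1.0 / (loop_cycles * instr_time) / 1e3;   /* result in kHz                 */
}

int main(void)
{
    double pic = toggle_khz(32e6, 4, 7);   /* Round 1 PIC: 7 instruction cycles per loop */
    double avr = toggle_khz(20e6, 1, 8);   /* Round 1 AVR: 8 clock cycles per loop       */
    printf("PIC: %.1f kHz max, %.1f kHz normalized to 1MHz\n", pic, pic / 32.0);
    printf("AVR: %.1f kHz max, %.1f kHz normalized to 1MHz\n", avr, avr / 20.0);
    return 0;
}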
  9. 2 points
I am building an integrated audio interface for a Baofeng UV-5R hand-held radio. Primarily this is to get a packet radio APRS system up and running. Traditionally, this seems to be accomplished with a crazy collection of adapters and hand-crafted interface cables. As I was looking for a better solution, I discovered the Teensy family of ARM microcontrollers (actually high-performance, Arduino-compatible ARMs built on NXP Cortex-M4 parts). The important part is actually the library support. Out of the box I was able to get a USB Audio device up and map the ADC and DAC to the input and output. This could all be configured with some simple "patch cord" wiring. So I created the "program" above. This combines the 2 audio channels from USB into a single DAC and also passes the data to an RMS block. This causes the audio to play on the DAC at 44.1kHz. It also keeps a running RMS value available for your own code. More on this later. Next, the ADC data is duplicated into both channels back through USB and on to the PC (Raspberry Pi). Again, an RMS block is present here as well. Now pressing EXPORT produces this little block of "code", which gets pasted into the Arduino IDE at the beginning of the program (before your setup & main).

/*
 * A simple hardware test which receives audio on the A2 analog pin
 * and sends it to the PWM (pin 3) output and DAC (A14 pin) output.
 *
 * This example code is in the public domain.
 */
#include <Audio.h>
#include <Wire.h>
#include <SPI.h>
#include <SD.h>
#include <SerialFlash.h>

// GUItool: begin automatically generated code
AudioInputUSB      usb1;    //xy=91,73.00001907348633
AudioInputAnalog   adc1;    //xy=153.00000381469727,215.00003242492676
AudioMixer4        mixer1;  //xy=257.00000762939453,72.00002670288086
AudioAnalyzeRMS    rms2;    //xy=427.0000114440918,266.000036239624
AudioOutputUSB     usb2;    //xy=430.00001525878906,217.00003242492676
AudioOutputAnalog  dac1;    //xy=498.00009536743164,72.00002670288086
AudioAnalyzeRMS    rms1;    //xy=498.00001525878906,129.0000295639038
AudioConnection    patchCord1(usb1, 0, mixer1, 0);
AudioConnection    patchCord2(usb1, 1, mixer1, 1);
AudioConnection    patchCord3(adc1, 0, usb2, 0);
AudioConnection    patchCord4(adc1, 0, usb2, 1);
AudioConnection    patchCord5(adc1, rms2);
AudioConnection    patchCord6(mixer1, dac1);
AudioConnection    patchCord7(mixer1, rms1);
// GUItool: end automatically generated code

const int LED = 13;

void setup() {
  // Audio connections require memory to work. For more
  // detailed information, see the MemoryAndCpuUsage example
  AudioMemory(12);
  pinMode(LED, OUTPUT);
}

void loop() {
  // Do nothing here. The Audio flows automatically
  if (rms1.available()) {
    if (rms1.read() > 0.25) {
      digitalWrite(LED, HIGH);
    } else {
      digitalWrite(LED, LOW);
    }
  }
  // When AudioInputAnalog is running, analogRead() must NOT be used.
}

And voilà! The PC sees this as an audio device and the LED blinks when the audio starts. PERFECT. Now, the LED will be replaced with the push to talk (PTT) circuit and the audio I/O will connect to the Baofeng through some filters. A single board interface to the radio from a Raspberry Pi that does not require 6 custom cables and 3 trips to eBay. Now I am waiting for my PCBs from dirtypcb.com. This entire bit of work is for my radio system that is being installed on the side of my house.
Here is the box: Inside is a Raspberry Pi 3, a Baofeng UV-5R for 2M work, 3 RTL dongles for receiving 1090MHz ADS-B, 978MHz ADS-B, 137MHz Satellite weather, GPS and 1 LoRaWAN 8 channel Gateway. I will write more about the configuration later if there is any interest. Good Luck.
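Since the plan is to replace that LED with the push to talk circuit, here is a rough sketch of the keying logic I have in mind. It is plain C with invented names and thresholds; the RMS value would come from the Audio library's RMS block (rms1.read() in the sketch above) and the return value would drive the PTT output instead of the LED.

#include <stdbool.h>
#include <stdint.h>

#define PTT_ON_THRESHOLD   0.05f   /* RMS level that keys the transmitter           */
#define PTT_OFF_THRESHOLD  0.02f   /* lower release level gives some hysteresis     */
#define PTT_HANG_MS        500u    /* keep transmitting this long after audio stops */

/* Call regularly with the latest RMS reading and the current time in
 * milliseconds; returns true while the PTT line should be keyed. */
bool ptt_update(float rms, uint32_t now_ms)
{
    static bool keyed = false;
    static uint32_t last_audio_ms = 0;

    if (rms > PTT_ON_THRESHOLD) {
        last_audio_ms = now_ms;
        keyed = true;
    } else if (keyed && rms < PTT_OFF_THRESHOLD &&
               (uint32_t)(now_ms - last_audio_ms) > PTT_HANG_MS) {
        keyed = false;
    }
    return keyed;
}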
  10. 2 points
Comments

I was musing over a piece of code this week trying to figure out why it was doing something that seemed to not make any sense at first glance. The comments in this part of the code were of absolutely no help; they were simply describing what the code was doing. Something like this:

// Add 5 to i
i += 5;

// Send the packet
sendPacket(&packet);

// Wait on the semaphore
sem_wait(&sem);

// Increment thread count
Threadcount++;

These comments just added to the noise in the file, made the code not fit on one page and harder to read, and did not tell me anything that the code was not already telling me. What was missing was what I was grappling with. Why was it done this way, why not any other way? I asked a colleague and to my frustration his answer was that he remembered that there was some discussion about this part of the code and that it was done this way for a very good reason! My first response was of course "well why is that not in the comments!?"

I remember having conversations about comments being a code smell many times in the past. There is an excellent talk by Kevlin Henney about this on YouTube. Just like all other code smells, comments are not universally bad, but whenever I see a comment in a piece of code my spider sense starts tingling and I immediately look a bit deeper to try and understand why comments were actually needed here. Is there not a more elegant way to do this which would not require comments to explain, where reading the code would make what it is doing obvious?

WHAT vs. WHY Comments

We all agree that good code is code which is properly documented, meaning the right amount of comments, but there is a terrible trap here that programmers seem to fall into all of the time. Instead of documenting WHY they are doing things a particular way, they instead put in the documentation WHAT the code is doing. As Henney explains, English, or whatever written language for that matter, is not nearly as precise a language as the programming language itself. The code is the best way to describe what the code is doing, and we hope that someone trying to maintain the code is proficient in the language it is written in, so why all of the WHAT comments?

I quite like this Codemanship video, which shows how comments can be a code smell, and how we can use the comments to refactor our code to be more self-explanatory. The key insight here is that if you have to add a comment to a line or a couple of lines of code, you can probably refactor the code into a function which has the comment as the name. If you have a comment on a line which only calls a function, that means the function is probably not named well enough to be obvious. Consider taking the comment and using it as the name of the function instead.

This blog has a number of great examples of how NOT to comment your code, and comical as the examples are, the scary part is how often I actually see these kinds of comments in production code! It has a good example of a "WHY" comment as follows.

/* don't use the global isFinite() because it returns true for null values */
Number.isFinite(value)

So what are we to do, how do we know if comments are good or bad? I would suggest the golden rule must be to test your comment by asking whether it is explaining WHY the code is done this way, or stating WHAT the code is doing. If you are stating WHAT the code is doing then consider why you think the comment is necessary in the first place.
First, consider deleting the comment altogether, the code is already explaining what is being done after all. Next try to rename things or refactor it into a well-named method or fix the problem in some other way. If the comment is adding context, explaining WHY it was done this way, what else was considered and what the trade-offs were that led to it being done this way, then it is probably a good comment. Quite often we try more than one approach when designing and implementing a piece of code, weighing various metrics/properties of the code to settle finally on the preferred solution. The biggest mistake we make is not to capture any of this in the documentation of the code. This leads to newcomers re-doing all your analysis work, often re-writing the code before realizing something you learned when you wrote it the first time. When you comment your code you should be capturing that kind of context. You should be documenting what was going on in your head when you were writing the code. Nobody should ever read a piece of your code and ask out loud "what were they thinking when they did this?". What you were thinking should be there in plain sight, documented in the comments. Conclusion If you find that you need to find the right person to maintain any piece of code in your system because "he knows what is going on in that code" or even worse "he is the only one that knows" this should be an indication that the documentation is incomplete and more often than not you will find that the comments in this code are explaining WHAT it is doing instead of the WHY's. When you comment your code avoid at all costs explaining WHAT the code is doing. Always test your comments against the golden rule of comments, and if it is explaining what is happening then delete that comment! Only keep the WHY comments and make sure they are complete. And make especially sure that you document the things you considered and concluded would be the wrong thing to do in this piece of code and WHY that is the case.
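To make the refactoring idea above concrete, here is a small made-up illustration in C. The names and the WHY are invented; the semaphore call is the same one used in the noisy example at the top of this post. The WHAT comment disappears because it becomes the function name, and the comment that survives records the WHY a future reader cannot recover from the code.

#include <semaphore.h>

static sem_t tx_sem;   /* assumed to be initialized elsewhere */

/* Before: the comment merely restates the code. */
static void send_before(void)
{
    // Wait on the semaphore
    sem_wait(&tx_sem);
}

/* After: the WHAT comment became the function name, and the remaining
 * comment explains WHY the wait is needed at all. */
static void wait_for_previous_packet_to_finish(void)
{
    /* The driver reuses its DMA buffer, so starting a new packet before the
     * semaphore is released would corrupt the frame still being transmitted. */
    sem_wait(&tx_sem);
}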
  11. 2 points
    I think specifically we need to know what processor you are trying to use as this differs from device to device. The simplest and most generic answer would be to add the UART to your project and click on the checkbox to enable interrupts for the driver. After generating code you will have to set the callback which you want called when the interrupt occurs. After this you need to make sure you are enabling interrupts in your main code and it should work. If you supply us with the details above I will post some screenshots for you on how to do this. Just to show you the idea I picked the 16F18875 and added the EUSART as follows: You can see I clicked next to "Enable EUSART Interrupts" Then in my main I ensured the interrupts are enabled. When I now run the code the ISR created by MCC is executed every time a byte is received. The ISR function is called EUSART_Receive_ISR and it is located in the eusart.c file. You can edit this function or replace it by setting a different function as ISR by calling EUSART_SetRxInterruptHandler if you want to change the behavior.
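To give an idea of what the code side looks like, here is a minimal sketch assuming the names MCC generated for me on the 16F18875 (mcc.h, EUSART_SetRxInterruptHandler, EUSART_Read, INTERRUPT_GlobalInterruptEnable and INTERRUPT_PeripheralInterruptEnable). Your generated names may differ slightly depending on the MCC version and device, so check the generated eusart.h and interrupt_manager.h.

#include "mcc_generated_files/mcc.h"

static volatile uint8_t lastByte;

/* Called by the MCC-generated ISR every time a byte is received. */
static void myRxHandler(void)
{
    lastByte = EUSART_Read();   /* read the received byte */
    /* keep this short: set a flag or buffer the byte for the main loop */
}

void main(void)
{
    SYSTEM_Initialize();                        /* MCC-generated initialization    */
    EUSART_SetRxInterruptHandler(myRxHandler);  /* replace the default RX handler  */
    INTERRUPT_GlobalInterruptEnable();          /* the step shown in my main above */
    INTERRUPT_PeripheralInterruptEnable();

    while (1)
    {
        /* main application work */
    }
}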
  12. 2 points
    I have seen lots of code that is tightly tied to specific hardware or to specific frameworks. This code is OK because it generally satisfies rule #1 (it must work) but as soon as the HW or framework changes this code becomes very difficult to adapt to the new system. Developers often state that they are abstracted from the hardware by the framework but this is generally never the case because the framework was provided by the hardware vendor. So what is a developer to do? Step #1 Ask the right question. Instead of asking HOW do I do a thing (how do I send bytes over the UART). The developer should ask WHAT do I need to do. Ideally the developer should answer this WHAT question at a pretty high level. WHAT do I need to do? I need to send a packet over RS485. Step #2 Define an API that satisfies the answers to the WHAT questions. If I must send a packet over RS485, then perhaps I need a SendPacket(myPacket) function. In the context of my application this function will be 100% clear to my partner developers. Step #3 Implement a trial of my new API that runs on my PC. This is sufficiently abstract that running my application on my development PC should be trivial. I can access a file, or the network, or the COM ports, or the STDIO and still satisfy the API. Get my partners to kick it around a bit. Repeat #1,#2 & #3 until the API is as clear as possible for THIS application. Step #4 Implement the new API on my HW or framework. This may seem like contributing to Lasagna code.... i.e. just another layer. But in fact this is the true definition of the hardware abstraction layer. ALL details of the HW (or framework) that are not required for THIS application are hidden away and no longer contribute to developer confusion. 100% of what is left is EXACTLY what your application needs. Now you have a chance at producing that mythical self documenting code. You will also find that unit testing the business logic can be more easily accomplished because you will MOCK all functions at this new API layer. Hardware NEVER has to be involved. Good Luck.
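Here is a rough sketch of what I mean, with invented names. The application only ever sees SendPacket(); the PC trial implementation below just prints the bytes, and the target implementation would drive the RS485 transceiver behind the same prototype.

#include <stdint.h>
#include <stdio.h>

typedef struct
{
    uint8_t address;
    uint8_t length;
    uint8_t payload[64];
} Packet;

/* The application-facing API: exactly WHAT this application needs, nothing more. */
void SendPacket(const Packet *p);

/* PC trial implementation: lets the business logic run and be unit tested on a
 * development machine with no hardware involved. */
void SendPacket(const Packet *p)
{
    printf("packet to %u, %u bytes:", (unsigned)p->address, (unsigned)p->length);
    for (uint8_t i = 0; i < p->length; i++)
    {
        printf(" %02X", p->payload[i]);
    }
    printf("\n");
    /* The target implementation would instead assert the RS485 driver-enable
     * line, push the bytes out of the UART and release the line again. */
}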
  13. 2 points
I have been working on "STEM" activities with kids of all ages for quite some time. Usually it is with my own kids but often it has been with kids at schools or in the neighborhood. This activity has been very rewarding but there are a few challenges that can quickly make the experience less interesting for the kids and an exercise in frustration for you, the mentor.

1) Don't be spontaneous (but fake it well) - My daughter and I wired a display to a Nano and wrote the code to count 0 to 9. This was a perfect bite-sized project because I was able to write enough 7-segment abstraction (struct digit { int a:1; int b:1; etc...}; ) to quickly stick a number on the display, and I left enough missing code to have her "help" by identifying which segments needed to be active to draw each number. This was a ton of fun and she was suitably engaged. However, on previous occasions we took on too much and the "library" that needed to be thrown together to bring the complexity into reach of a 7 year old was more than I could deliver inside her attention span. So you do need to be prepared for when the kids are motivated to play with electronics... but some of that preparedness might be a stock of ready-to-go code modules that you can tap into service.

2) Be Prepared with stuff. - I like to keep a pretty well stocked assortment of parts, tools and ingredients for many projects. With prices for components so cheap, I always buy a few extras for the stockpile to enable a kid with a sudden itch to do something cool. Unfortunately, there are often a few unintended hurdles. For example: I have a large collection of 7-segment displays and a small pile of Arduino Nanos.

3) 3D printers are fun and interesting.... but laser cutters are better and scissors are best. We all like to show off the amazing things you can do with a 3D printer and I have 3 of them. Unfortunately, using a printer requires a few things: a) patience, b) learning to 3D model, c) patience. My kids are quite good at alerting me when my print has turned into a ball of yarn. But none of the kids has developed any interest in 3D modeling for the 3D printer. I also have a fairly large laser cutter. This is FAR more fun and the absolute best tool I have put into my garage. My laser cutter is 130W and cuts about 1.5 meters x 1 meter or so. We have cut the usual plywood and acrylic. We also cut gingerbread, fabric, paper, and cardboard. (Laser cut gingerbread houses taste bad.) I can convert a pencil sketch into a DXF file in a few minutes... BUT the scissors are better for that quick and dirty experiment. Which leads to...

4) Fail Fast and with ZERO effort.... Kids HATE TO WASTE THEIR TIME. Of course what they consider wasted time and what you and I consider wasted time is a different discussion. For example: folding laundry is a waste of time because you will just unfold the laundry to wear the clothes. So it is better to jam everything under the bed. Taking 2 hours to design a 3D model for the laser cutter or 3D printer is a waste of time if the parts don't work when you are done. However, if you can quickly cut something out of cardboard with scissors or a knife, then the time cost is minimal and if it doesn't work out, they are not sad. I have often had a sad kid after an experiment that took a large amount of effort. I thought the experiment worked well and we all learned something, but the "wasted effort" was a problem.
I have also seen grownups ride a project down in flames because it was "too big to fail" or "we will have wasted all that money if we quit now". This is the same problem on a grand scale as the kids. So teach them to fail fast and learn from each iteration. As the project develops, the cool tools will be used, but not in the first pass.

5) Pick your battles. Guide your charges with small projects that can be "completed" in 30 minutes or so. DO NOT nag them to "finish" if it is not done on the first outing. If the kid finds that project fun, they will hound you to work on it with them. As they develop skills, they will work on parts themselves while you are not around. (Watch out for super glue and soldering irons.) This is the ideal situation. So you need to do teasers and have fun. They will come back to the fun projects and steer clear of boring ones.

So what has worked for me?

1) Rockets. I have bought 12-packs of rockets as classroom kits. I keep a few in the stockpile. Once you have a field to fly them, you can always get an entire group of kids ready to fly small rockets in an hour or so, and they are fun for all ages.

2) Paper Airplanes. Adults and kids can easily burn an afternoon with paper airplanes. Kids by themselves will quickly tire of it, so teach them to fold a basic airplane, how to throw, and add a little competition. Don't forget to include spectacular crashes as a competition goal because that will keep their spirits up when problems occur.

3) VEX Robotics. I have done FIRST robots, Lego League and VEX robotics. My favorite is VEX IQ because the budget can be reached by a small group of families and the field fits on the back porch. I did have to bribe one daughter who was doing the code with cookies. This started a tradition of "cookies and code". Each task completed earns a cookie. Each bug fixed is a cookie. The rewards are fantastic!

4) Robotics at Home. Robotics are good for kids because they incorporate so many aspects of engineering (mechanical, electrical, software) into one package. You can easily fill in any of these elements while the child explores their interest. One of my daughters likes to build robots. Another likes to program them. I simply remove any technical obstacles, hopefully before they notice them coming. This allows them to keep living in the moment and solving the problems at their level.

5) SCIENCE! Be prepared to take ANY question they have and turn it into a project. We launched baking soda & vinegar rockets. I did 3D print them so I had to plan ahead. We have also recreated Galileo's gravity experiments in our stairwell. We recorded the difference in the impact of different objects by connecting a microphone to an oscilloscope. We placed the microphone under a piece of wood so the objects would make a sharp noise. We then spent the time trying to release objects at exactly the same time. We used a lever to lift a car! The child was 5. The lever was a 3 meter steel tube. The car was a small Jeep. We did not lift it very far and we bent the lever, but the lesson was learned and will never be forgotten.

6) Change the Oil! Or any other advanced chore. Involve the child in activities that are beyond them but don't leave them stranded. I make sure my new drivers have changed the oil and a tire. I try to involve the younger kids just because they will be underfoot anyway. A child with engineering interests will make their desires known.

In the end you are providing an enriching experience for a child. Keep the experience short & sweet.
The objective is to walk away THIS happy. If the experience is positive the child will come back for more. A future lesson can teach ohms law, or software abstraction. The first experiences are just to have fun and do something interesting. Please share your kid project ideas! Include pictures! Good Luck
  14. 2 points
Remember, this timer counts up, and you get the interrupt when it rolls over. To interrupt at a precise interval, you must compute the number of "counts" required for that interval and then subtract from 65535 to determine the timer load value.

void setTimer(unsigned int intervalCounts)
{
    TMR1ON = 0;
    TMR1 = 65535 - intervalCounts;
    TMR1ON = 1;
}

By turning the timer off, then setting the counts and restoring the timer, you can be sure that you will not get unexpected behavior if the timer value is written in an unexpected order. I will cover this topic in the next blog post on timers.
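As a usage sketch for setTimer() above: the counts-per-millisecond number depends entirely on your clock and prescaler, so the value below is only an assumption (FOSC = 32MHz, Timer1 clocked from FOSC/4 with a 1:8 prescaler gives 1µs per count). The bit names are the usual XC8 ones for this family; adjust both to your own project.

#include <xc.h>

void setTimer(unsigned int intervalCounts);    /* from the code above */

#define TICKS_PER_MS  1000u                    /* assumes a 1MHz Timer1 clock, see note above */

void startTenMillisecondTick(void)
{
    setTimer(10u * TICKS_PER_MS);              /* 10ms worth of counts                     */
    TMR1IF = 0;                                /* clear any stale overflow flag            */
    TMR1IE = 1;                                /* enable the Timer1 overflow interrupt     */
    PEIE = 1;                                  /* peripheral and global interrupt enables  */
    GIE = 1;
}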
  15. 2 points
Assembly language may no longer be the mainstream way to write code for embedded systems; however, it is the best way to learn how a specific CPU works without actually building one. Assembly language is simply the raw instruction set of a specific CPU broken into easy-to-remember mnemonics with a very basic syntax. This gives you full control of everything the CPU does without any translation provided by a compiler. Sometimes this is the only reasonable way to do something that cannot be represented by a higher level language. Here is an example from a project I was working on today.

Today I wanted to create a 128-bit integer (16 bytes). That means I will need to add, subtract, multiply, etc. on my new 128-bit datatype. I was writing for a 32-bit CPU so this would require 4 32-bit values concatenated together to form the 128-bit value. For the trivial problem of adding two of these numbers together, let's consider the following imaginary code.

int128_t foo = 432123421234;
int128_t bar = 9873827438282;
int128_t sum = foo + bar;

But my 32-bit CPU does not understand int128_t so I must fake it. How about this idea:

int32_t foo[] = {0x00112233, 0x44556677, 0x8899AABB, 0xCCDDEEFF};
int32_t bar[] = {0xFFEEDDCC, 0xBBAA9988, 0x77665544, 0x33221100};
int32_t sum[4];

sum[0] = foo[0] + bar[0];
sum[1] = foo[1] + bar[1];
sum[2] = foo[2] + bar[2];
sum[3] = foo[3] + bar[3];

But back in grade school I learned about the 10's place and how I needed to carry a 1 when the sum of the one's place exceeded 10. It is possible that foo[0] + bar[0] could exceed the maximum value that can be stored in an int32_t, so there will be a carry from that add. How do I add the carry into the next digit? In C I would need to rely upon some math tricks to determine if there was a carry (a plain-C sketch of that trick appears at the end of this post). But the hardware already has a carry flag and there are instructions to use it. We could easily incorporate some assembly language and do this function in the most efficient way possible.

So enough rambling. Let us see some code. First, we need to configure MPLAB to create an ASM project. Create a project in the normal way, but when you get to select a compiler you will select MPASM. Now you are ready to get the basic source file up and running. Here is a template to cut/paste.
#include "p16f18446.inc" ; CONFIG1 ; __config 0xFFFF __CONFIG _CONFIG1, _FEXTOSC_ECH & _RSTOSC_EXT1X & _CLKOUTEN_OFF & _CSWEN_ON & _FCMEN_ON ; CONFIG2 ; __config 0xFFFF __CONFIG _CONFIG2, _MCLRE_ON & _PWRTS_OFF & _LPBOREN_OFF & _BOREN_ON & _BORV_LO & _ZCD_OFF & _PPS1WAY_ON & _STVREN_ON ; CONFIG3 ; __config 0xFF9F __CONFIG _CONFIG3, _WDTCPS_WDTCPS_31 & _WDTE_OFF & _WDTCWS_WDTCWS_7 & _WDTCCS_SC ; CONFIG4 ; __config 0xFFFF __CONFIG _CONFIG4, _BBSIZE_BB512 & _BBEN_OFF & _SAFEN_OFF & _WRTAPP_OFF & _WRTB_OFF & _WRTC_OFF & _WRTD_OFF & _WRTSAF_OFF & _LVP_ON ; CONFIG5 ; __config 0xFFFF __CONFIG _CONFIG5, _CP_OFF ; GPR_VAR UDATA Variable RES 1 SHR_VAR UDATA_SHR Variable2 RES 1 ;******************************************************************************* ; Reset Vector ;******************************************************************************* RES_VECT CODE 0x0000 ; processor reset vector pagesel START ; the location of START could go beyond 2k GOTO START ; go to beginning of program ISR CODE 0x0004 ; interrupt vector location ; add Interrupt code here RETFIE ;******************************************************************************* ; MAIN PROGRAM ;******************************************************************************* MAIN_PROG CODE ; let linker place main program START ; initialize the CPU LOOP ; do the work GOTO LOOP END The first thing you will notice is the formatting is very different than C. In assembly language programs the first column in your file is for a label, the second column is for instructions and the third column is for the parameters for the instructions. In this code RES_VECT, ISR, MAIN_PROG, START and LOOP are all labels. In fact, Variable and Variable2 are also simply labels. The keyword CODE tells the compiler to place code at the address following the keyword. So the RES_VECT (reset vector) is at address zero. We informed the compiler to place the instructions pagesel and GOTO at address 0. Now when the CPU comes out of reset it will be at the reset vector (address 0) and start executing these instructions. Pagesel is a macro that creates a MOVLP instruction with the bits <15:11> of the address of START. Goto is a CPU instruction for an unconditional branch that will direct the program to the address provided. The original PIC16 had 35 instructions plus another 50 or so special keywords for the assembler. The PIC16F1xxx family (like the PIC16F18446) raises that number to about 49 instructions. You can find the instructions in the instruction set portion of the data sheet documented like this: The documentation shows the syntax, the valid range of each operand, the status bits that are affected and the work performed by the instruction. In order to make full use of this information, you need one more piece of information. That is the Programmers Model. Even C has a programmers model but it does not always match the underlying CPU. In ASM programming the programmers model is even more critical. You can also find this information in the data sheet. In the case of the PIC16F18446 it can be found in chapter 7 labeled Memory Organization. This chapter is required reading for any aspiring ASM programmers. Before I wrap up we shall modify the program template above to have a real program. START banksel TRISA clrf TRISA banksel LATA loop bsf LATA,2 nop bcf LATA,2 GOTO loop ; loop forever END This program changes to the memory bank that contains TRISA and clears TRISA making all of PORT A an output. 
Next it changes to the memory bank that contains the LATCH register for PORT A and enters the loop. BSF is the mnemonic for Bit Set File, and it allows us to set bit 2 of the LATA register. NOP is for No OPeration and just lets the bit set settle. BCF is for Bit Clear File and allows us to clear bit 2, and finally we have a branch to loop to do this all over again. Because this is in assembly, we can easily count up the instruction cycles for each instruction and determine how fast this will run. Here is the neat thing about PICs: EVERY instruction that does not branch takes 1 instruction cycle (4 clock cycles) to execute. So this loop is 5 cycles long. We can easily add instructions if we need to produce EXACTLY a specific waveform. I hope this has provided some basic getting-started information for assembly language programming. It can be rewarding and will definitely provide a deeper understanding of how these machines work. Good Luck
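Returning to the 128-bit addition example earlier in this post, here is a sketch of the "math trick" alternative in plain C, so you can compare it with the hardware carry flag approach. Because unsigned arithmetic wraps around, a carry out of each 32-bit word shows up as the sum being smaller than one of its operands. On a CPU with an add-with-carry instruction the whole thing collapses to a handful of instructions, which is exactly the point made above. The function name and the use of uint32_t (rather than the int32_t shown earlier) are my choices for the illustration.

#include <stdint.h>

/* sum, a and b are little-endian arrays of four 32-bit words (128 bits total). */
void add128(uint32_t sum[4], const uint32_t a[4], const uint32_t b[4])
{
    uint32_t carry = 0;
    for (int i = 0; i < 4; i++) {
        uint32_t partial = a[i] + b[i];
        uint32_t c1 = (partial < a[i]);     /* carry out of a[i] + b[i]          */
        sum[i] = partial + carry;
        uint32_t c2 = (sum[i] < partial);   /* carry out of adding the carry-in  */
        carry = c1 | c2;                    /* at most one of these can be set   */
    }
}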
  16. 2 points
Time for part 2! Last time, I gave you the homework of downloading and installing MPLAB and finding a Curiosity Nano DM164144. Once you have done your homework, it is time for STEP 3: get that first project running. Normally my advice would be to break out MPLAB Code Configurator and get the initialization code up and running, but I did not assign that for homework! So we will go old school and code straight to the metal. Fortunately, our first task is to blink an LED.

Step 1: Find the pin with the LED. A quick check of the schematic finds this section on page 3. This section reveals that the LED is attached to PORT A bit 2. With the knowledge of the LED location, we can get to work at blinking the LED. The first step is to configure the LED pin as an output. This is done by clearing bits in the TRIS register. I will cheat and simply clear ALL the bits in this register. Next we go into a loop and repeatedly set and clear the PORT A bit 2.

#include <xc.h>

void main(void)
{
    TRISA = 0;
    while(1)
    {
        PORTA = 0;
        PORTA = 0x04;
    }
    return;
}

Let us put this together with MPLAB and get it into the device. First we will make a new project. Second, we will create our first source file by selecting New File and then following Microchip Embedded -> XC8 Compiler -> main.c. Give your file a name (I chose main.c) and you are ready to enter the program above. And this is what it looks like typed into MPLAB. But does it work? Plug in your shiny demo board and press this button: And voilà, the LED is lit... but wait, my code should turn the LED ON and OFF... Why is my LED simply on?

To answer that question I will break out my trusty logic analyzer. That is my Saleae Logic Pro 16. This device can quickly measure the voltage on the pins and draw a picture of what is happening. One nice feature of this device is it can show both a simple digital view of the voltage and an analog view. So here are the two views at the same time. Note the LED is on for 3.02µs (microseconds for all of you 7th graders). That is 0.00000302 seconds. The LED is off for nearly 2µs. That means the LED is blinking at 201.3kHz (201 thousand times per second). That might explain why I can't see it. We need to add a big delay to our program and slow it down so humans can see it. One way would be to make a big loop and just do nothing for a few thousand instructions. Let us make a function that can do that. Here is the new program.

#include <xc.h>

void go_slow(void)
{
    for(int x = 0; x < 10000; x++)
    {
        NOP();
    }
}

void main(void)
{
    TRISA = 0;
    while(1)
    {
        PORTA = 0;
        go_slow();
        PORTA = 0x04;
        go_slow();
    }
    return;
}

Note the new function go_slow(). This simply executes a NOP (No Operation) 10,000 times. I called this function after turning the LED OFF and again after turning the LED ON. The LED is now blinking at a nice rate. If we attach the Saleae to it, we can measure the new blink. Now it is going at 2.797 times per second. By adjusting the loop from 10,000 to some other value, we could make the blink anything we want. To help you make fast progress, please notice the complete project Step_3.zip attached to this post. Next time we will be exploring the button on this circuit board. For your homework, see if you can make your LED blink useful patterns like Morse code. Good Luck

Step_3.zip
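If you would rather not tune a NOP loop, XC8 also provides the __delay_ms() macro, which needs _XTAL_FREQ defined to match the clock the CPU is actually running at. This is just a sketch of that alternative: the 4MHz value is an assumption, and the rest of the project is the same as above, so set _XTAL_FREQ to your real oscillator frequency or the delays will be scaled wrong.

#define _XTAL_FREQ 4000000UL    /* assumed CPU clock in Hz for the delay macros */
#include <xc.h>

void main(void)
{
    TRISA = 0;                   /* PORT A as outputs, as before */
    while(1)
    {
        PORTA = 0;               /* LED off                      */
        __delay_ms(250);         /* a human-visible blink rate   */
        PORTA = 0x04;            /* LED on (PORT A bit 2)        */
        __delay_ms(250);
    }
    return;
}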
  17. 2 points
    Plus you have colleges still making their students use C18... https://www.microchip.com/forums/m1083909.aspx
  18. 2 points
    https://amzn.to/2BR0Xnr (Link to the book on Amazon) I have been a fan of Michael Barr since I read about his work with the Toyota Unintended Acceleration case where his team was able to identify and reproduce some very exotic conditions where automotive software failed. (Every Embedded Engineer should at least read about that case, the link is a good summary) I recently noticed a book co-authored by Barr called “Programming Embedded Systems” and I was excited to get my hands on it as I expected to learn a couple of profound tricks from a master (the book itself is pretty old, 2nd edition published in 2006). Unfortunately the book did not live up to my expectations. Since I am often asked which book I would recommend to start with, I was interested in taking this one for a test drive. Looking over the table of contents I thought that this may be a good beginner’s book, introducing embedded systems. Fair enough. Table of Contents: Introduction Getting to Know the Hardware Your First Embedded Program Compiling Linking and Locating Downloading and Debugging Memory Peripherals Interrupts Putting it All Together Operating Systems eCOS Examples Embedded Linux Examples Extending Functionality Optimization Techniques The first thing I found odd was the choice of development platform. The book is very tightly coupled to the Arcom VIPER-Lite development kit which features a 200MHz PXA255 Xscale processor (based on the ARM v.5TE architecture). This is a PC/104 form factor board and boasts 64MB of SDRAM and 16MB of “ROM” of which 1MB is dubbed as “BOOT ROM”. In fact it has almost every property which the book uses in Chapter 1 to describe what would NOT be an embedded system! I quote "The design of an embedded system to perform a dedicated function is in direct contrast to that of the personal computer. It too is comprised of computer hardware and software and mechanical components (disk drives, for example). However, a personal computer is not designed to perform a specific function. Rather, it is able to do many different things. Many people use the term general-purpose computer to make this distinction clear.". Looking at the VIPER PC104 computer it seems to me very much general purpose and I would by that definition classify the platform chosen here as a "general-purpose computer" instead. Besides, in my world of 8-bit microcontrollers where 16KB of FLASH and 1KB of RAM is large I do not easily consider a machine with 64MB of SDRAM as an embedded system. I do hate to make the distinction between the two myself though (would you consider e.g. your cellular phone an embedded system? It has more power than the original IBM PC or the Apple II after all!). But to be fair the concepts do transfer and the board at least does not have a Keyboard, mouse and VGA port, so I will go along with that if you could get your hands on one! I wanted to get at least the specs for the board only to find that the board is no longer available from Eurotech (it took me some time to realize that in 2007 (11 years ago - 1 year after publication of the 2nd edition of this book) Arcom was re-named to Eurotech). In fact it looks like this board has become entirely obsolete and unobtainable which is a pity as the book’s examples are so tightly coupled to this board that you would not be able to “follow along” if you did not have it at hand. If only they had just used a simple 8051, AVR or PIC or even an Arduino board instead of this very specialized and unique board nobody has ever heard of! 
Over all the book seems to try and do way too much in too little depth, so we are covering the very basics like introducing what is an embedded system and what is a peripheral through to Embedded Linux and Real Time Systems in one book of 260 odd pages. The result is that the book barely scratches the surface on each topic and has almost no meat behind any of these topics. O’Reilly has entire books dedicated to topics which are dismissed in one paragraph in this one. I was e.g. amused in Chapter 2 when there was a section on “Schematic Fundamentals” showing the symbols used for a resistor, capacitor and a diode and an introduction to what a timing diagram looks like. Chapter 7 explains how bitwise AND, OR and XOR can be used to manipulate bits, etc. but in Chapter 9 the examples start using function pointers without skipping a beat or explaining what a pointer is or how they can point at functions, just assuming the same reader who needed to be explained how to mask bits will be adept at using pointers ... There were also a number of subjective “facts” in the book which I do not quite agree with, e.g. this table: I seriously question the numbers in this table, e.g. I would say from experience that the number of units sold goes up as systems become smaller and simpler. I think I would go out of business rather quickly if I did $100K developments for products selling at $10 a piece and never sold more than 100 pieces of these. I would also say that my car’s ABS controller (likely on the lower end of that resource scale) must be fail-proof, while my Cellular Phone (on the High scale) requires me restarting it regularly, and I would say my ABS computer would have a life of 10’s of years while my phone is hardly going to live for more than 2. So let’s just agree that plenty of embedded systems below 64KB are safety critical and sells millions of units and use nanowatts of electricity so I disagree with the last 4 rows there. It seems like this book is trying to cater for fairly experienced programmers as well as complete novices. My opinion is that any book should always pick a specific audience and speak to that audience well. If you try to speak to everyone you will please nobody. If the reader is someone so green that they cannot mask in bits and do not know how to compile and link their program the same book is not appropriate for showing people real-time OS concepts, schedulers and function pointers. I think the book leaps forward too far too often in a way that newcomers will feel that half of the book was so basic that they already know this, and the other half is so advanced that it would be out of their reach to grasp. And for experts the advanced parts of the book are so shallow that they would get very little value out of it, and of course the other half would be a waste of time. So in summary I think the book covers a lot of ground by scratching the surface on just too many concepts without getting deep enough into any of them to really teach anybody anything new about Embedded Programming. I always make sure to read a couple of the good and bad reviews on Amazon and the reviews there were very much in line with my experience. Like one of the reviewers this was the first time I bought an O’Reilly book where I really felt that it was not worth the money. One reviewer I think summed it up the best “... To experienced programmer, this is never the book for you. 
To beginner, maybe this is easier to understand, but i really dont think this will help you in EMBEDDED C programming...” you can read his full review as well as the others here https://www.amazon.com/review/R10VHMT76YWZVV/ref=cm_cr_srp_d_rdp_perm?ie=UTF8&ASIN=1565923545
  19. 2 points
    I think you will be pretty happy with the PIC16F1xxxx for that kind of project. The main advantage of the K42 would be the additional RAM and FLASH. I totally agree with @N9WXU there - when you are adding wifi and communication protocols that counts for a lot and you may end up constrained if you are trying to do it all on the smaller PIC16. If that is the next step you may find yourself porting the code right after writing it, which is never a good timeline. The K42 has up to 128KB of FLASH and 8KB of RAM while the PIC16 is capped at 56KB of FLASH and 4KB of RAM. Although it should be fairly straightforward to port the PIC16F18877 code to any K42 if it is written in C and if you use something like MCC to generate the hardware code for you. If you need help with that part please do give us a shout, we can definitely lend a hand with that part!
  20. 2 points
Everything in your list of requirements looked pretty reasonable for the PIC16 until you said wireless. For wireless, a lot depends upon the details. Bluetooth, WiFi, sub-GHz, LoRa, etc. will all require some kind of RF module. Each RF module generally comes with a co-processor that handles the heavy lifting for the wireless protocol, however... each wireless module will still place demands upon the host processor. For example, the WINC1500 WiFi driver on an ATmega4808 took around 10k. After reviewing the driver, it would be possible to shrink the code quite a lot, but it would be at the expense of supporting the code for the duration. That is not to suggest that wireless is not appropriate for a PIC16; it is just that wireless often comes with additional expectations that are not immediately apparent. Here are a few:

1) Security. Most wireless applications require a secure and authenticated connection to the host.
2) Phone interface. Many wireless applications seem to be tied to iOS and Android. This generally implies BLE 4.2 but could mean WiFi.
3) IoT. Obviously wireless existed before the internet, but most of the wireless customer discussions ended up being Internet of Things discussions. This drove the security requirements.
4) Over-the-air update. Because security and wireless standards are moving fast, most customers end up requiring the ability to update the software to adapt with the times.

When you start going through these requirements you can quickly find yourself in a 128k MCU, and that will be a PIC18 or AVR in 8-bit land. A reasonable rule of thumb with these kinds of applications is that the PIC16/18 will require 2x the FLASH of an AVR for the same application. The details of why this is true would be a future technical post.
  21. 2 points
Hi Keith, Welcome to our forum! We aim to please 🙂 The short answer is that I think the PIC16F18877 is an excellent choice to replace the 16F887. It can pretty much do everything the 887 can do, only better, since it can go 50% faster and has a lot more to offer, like CLCs, PPS on the pins and a host of other features. Perhaps you can give us a little more detail on what you are aiming to do and how. Will you be using XC8 and C for this, and what are you looking for in terms of analog? There are also a lot of interesting go-to parts in the PIC18 family; in particular look at the 18FxxK42, which has interesting additional features like DMA and vectored interrupts that are not on the PIC16F1 parts. They also have more flash and RAM if that is what you need.
  22. 2 points
    Even better. The USB programming interface code is all on GitHub HERE-> https://github.com/MicrochipTech/XPRESS-Loader
  23. 1 point
    I just want to say thank you. I learned a lot from this post.
  24. 1 point
    You have mis-interpreted the table, each row is meant to be independent. I thought the same thing, until I realized that the columns were not to be taken as a whole, but just defining low, medium, or high for each independent category column.
  25. 1 point
    It goes into flash, the USB device has code to reprogram the device flash.
  26. 1 point
A few weeks ago, I installed shop air in my garage. I was pretty proud when it held 150psi all night. But of course I did not quite tighten a connection, and at 2 AM (or so my daughter tells me) there was a loud bang followed by a steady "compressor" noise. I did not notice until the next morning, when I wondered why there was a noise coming from the garage. That compressor was pretty hot after running 6 hours straight. Of course this could be prevented by turning the compressor off each night. But I write embedded software for a living, and lately I have been deep into IoT projects. Naturally, this was an ideal chance to do something about my dumb compressor. Ingredients: First, I needed a way to switch the compressor on and off remotely. These Sonoff switches are almost perfect. On the plus side, they have an ESP8266 inside so I can run Tasmota, which is a generic home automation / IoT firmware for all things 8266. On the down side, they are only good for 10A. So I added a 120VAC 2-pole relay good for 30A. The compressor has a 16.6A motor draw, so some overkill seems appropriate. I reflashed the Sonoff Basic with Tasmota and installed everything inside a metal electrical box. And when I visited the web page: I can turn the compressor on/off from my phone. Fantastic! As long as I had everything opened up, I went ahead and added 2 pressure sensors, left and right of the primary pressure regulator. The left side sensor goes to the compressor and lets me know what it is doing. I am now tempted to remove the mechanical hysteretic controller on the compressor and simply use the Sonoff switch and some electronic pressure sensing to do the same thing. We shall see. Everything is now in place to ensure the compressor can be automatically turned off, or have a maximum run limit. The only thing left is software! Good Luck.
  27. 1 point
I have not used Harmony or the web net server, so I have not run into this directly. But there may be a few other places to check that cause resets on other systems. Often the assert() functions will end in a software reset, so your code may not call the reset directly, but if you use assert in your error checks you will reset. Some malloc libraries will also fail with a reset if there is a heap failure, i.e. the stack runs into the heap. This is often detected with a no-man's land between the stack and the heap. The no-man's land is filled with a magic number. If the magic number has changed, the stack ran into the no-man's land and may have corrupted the heap.
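As an illustration of that last point, here is a minimal sketch of a guard-zone check. The names, the guard size and the use of a plain static array are all assumptions for the example (a real system would usually place the zone via the linker); this is not Harmony code.

#include <stdint.h>
#include <stdbool.h>

#define GUARD_WORDS  8
#define GUARD_MAGIC  0xDEADBEEFu

/* Illustrative no-man's land between the heap and the stack.
   Here it is just a static array so the example compiles anywhere. */
static volatile uint32_t stack_guard[GUARD_WORDS];

void guard_init(void)
{
    for (int i = 0; i < GUARD_WORDS; i++)
    {
        stack_guard[i] = GUARD_MAGIC;
    }
}

/* Call periodically (e.g. from the idle loop). Returns false if the
   stack has grown into the guard zone and the heap may be corrupt -
   at that point a controlled reset or assert is the usual response. */
bool guard_is_intact(void)
{
    for (int i = 0; i < GUARD_WORDS; i++)
    {
        if (stack_guard[i] != GUARD_MAGIC)
        {
            return false;
        }
    }
    return true;
}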
  28. 1 point
Sometimes I get the sad impression that embedded FW engineers only understand one data container: the array. The array is a powerful thing and there are many good reasons to use it, but it also has some serious problems. For instance, a number of TCP/IP libraries use arrays to hold a static list of sockets. When you wish to create a new socket, the library takes one of the unused sockets from the array and returns a reference (a pointer or index) so the socket can be used as a parameter for the rest of the API. It turns out that sockets are somewhat heavy things (they use lots of memory), so you always want to optimize the system to have the smallest number of sockets necessary. Unfortunately, you must "pick" a reasonable number of sockets early in the development process. If you run out of sockets you must go back and recompile the library for the new socket count. Now there is a dependency that is not obvious, only fails at run time, and links the feature count of the product to the underlying library. You will see bugs like "when I am using the app I no longer get notification e-mails".

It turns out that this problem can be easily solved with a dynamic container, i.e. one that grows at runtime as you need it to. A brute force method would be to rely upon the heap to reallocate the array at runtime and simply give the library a pointer to the array. That will work, but it inserts a heavy copy operation and the library has to be paused while the old array is migrated to the new array. I propose that you should consider a linked list. I get a number of concerns from other engineers when I make this suggestion, so hang tight just one moment.

Concerns:
1. Allocating the memory requires the heap and my application cannot do that.
2. Traversing the list is complicated and requires recursion. We cannot afford the stack space.
3. A linked list library is a lot of code to solve this problem when a simple array can manage it.
4. The linking pointers use more memory.
If you have a new concern, post it below. I will talk about these first.

Concern #1, Memory allocation. I would argue that a heap is NOT required for a linked list. It is simply the way computer science often teaches the topic: allocate a block of memory for the data, place the data in the block (the data is often supplied as function parameters), then insert the block into the list in the correct place. Computer science courses often teach linked lists and sorting algorithms at the same time, so this process forms a powerful association. However, what if the process worked a little differently?
Library Code -> Define a suitable data structure for the user data. Include a pointer for the linked list.
User Code -> Create a static instance of the library data structure. Fill it with data.
User Code -> Pass a reference to the data structure to the library.
Library Code -> Insert the data structure into the linked list.
If you follow this pattern, the user code can have as many sockets or timers or other widgets as it has memory for. The library will manage the list and operate on the elements. When you delete an element you are simply telling the library to forget it, but the memory is always owned by the user application. That fixes the data count dependency of the array. (There is a small sketch of this pattern at the end of this post.)

Concern #2, Traversing the list is complex and recursive. First, recursion is always a choice. Just avoid it if that is a rule of your system; every recursive algorithm can be converted to a loop. Second, traversing the list is not much different than an array.
The pointer data type is larger so it does take a little longer. Here is a quick example:

#include <stdio.h>

struct object_data
{
    int mass;
    struct object_data *nextObject;
};

int findTheMassOfTheObjects(struct object_data *objectList)
{
    int totalMass = 0;
    struct object_data *thisObject = objectList;
    while (thisObject)
    {
        totalMass += thisObject->mass;
        thisObject = thisObject->nextObject;
    }
    printf("The mass of all the objects is %d grams\n", totalMass);
    return totalMass;
}

It does have the potential of running across memory if the last object in the list does NOT point at NULL, so that is a potential pitfall.

Concern #3, A linked list library is a lot of code. Yes it is. Don't do that. A generic library can be done and is a great academic exercise, but most of the time the additional next pointers and a few functions to insert and remove objects are sufficient. The "library" should be a collection of code snippets that your developers can copy and paste into the code. This will provide reuse but break the dependency on a common library, allowing data types to change or modifications to be made.

Concern #4, A linked list will use more memory. It is true that the linked list adds a pointer element to the container data structure. However, this additional memory is probably much smaller than the "just in case" additional memory of unused array elements. It is probably also MUCH better than going back late in the program, recompiling an underlying library and adding a lot more memory so the last bug will not happen again.

A little history. The linked list was invented by Allen Newell, Cliff Shaw and Herbert Simon. These men were developing IPL (Information Processing Language) and decided that lists were the most suitable solution for containers for IPL. They were eventually awarded a Turing Award for making basic contributions to AI, the psychology of human cognition and list processing. Interestingly, IPL was developed for a computer called JOHNNIAC which had a grand total of 16 kbytes of RAM. Even with only 16KB, IPL was very successful and linked lists were determined to be the most suitable design for that problem set. Most of our modern microcontrollers have many times that memory and yet we are falling back on arrays more and more often. If you are going to insist on an array where a linked list is a better choice, you can rest easy knowing that CACHE memory works MUCH better with arrays, simply because you can guarantee that all the data is in close proximity so the entire array is likely living in the cache. Good Luck. P.S. - The timeout driver and the TCP library from Microchip both run on 8-bit machines with less than 4KB of RAM and they both use linked lists. Check out the code in MCC for details.
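As promised under Concern #1 above, here is a minimal sketch of that pattern: caller-owned, statically allocated nodes handed to a "library" that only links them together. The widget names are invented for illustration; the real MCC timeout and TCP code differs in the details.

#include <stddef.h>

/* "Library" side: the node type embeds the link pointer. */
struct widget
{
    int value;
    struct widget *next;    /* link used only by the library */
};

static struct widget *widgetList = NULL;    /* head of the library's list */

/* Insert a caller-owned node at the head of the list; no heap involved. */
void widget_register(struct widget *w)
{
    w->next = widgetList;
    widgetList = w;
}

/* Forget a node again; the memory still belongs to the caller. */
void widget_remove(struct widget *w)
{
    struct widget **pp = &widgetList;
    while (*pp)
    {
        if (*pp == w)
        {
            *pp = w->next;
            return;
        }
        pp = &(*pp)->next;
    }
}

/* "User" side: nodes are plain static objects, as many as RAM allows. */
static struct widget sensorWidget, displayWidget;

void user_init(void)
{
    sensorWidget.value  = 1;
    displayWidget.value = 2;
    widget_register(&sensorWidget);
    widget_register(&displayWidget);
}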
  29. 1 point
I completely agree with you! I try to avoid comments, because they tend to de-synchronize from the actual code very easily.

#pragma config PWRTE = ON // Power-up Timer Enable bit->Power up timer disabled

What does that mean? Is it a bugfix? Is it a bug? I find this so often in mature projects. Someone edited bit masks for a register, but forgot to update the comments. Even Hungarian notation, as an extended form of commenting, de-synchronizes eventually, because you tend to ignore the prefixes after a while.

uint32_t u16counter;

So my personal approach is to find identifiers and names that formulate readable code, which says everything that is actually happening on that specific line of code. There is no need to write functions with many lines of code to make the compilation more efficient or to save stack space; modern compilers will optimize that out again. Encapsulating a partial solution not only brings structure to your code, it makes the function actually readable. And by readable I mean you could read it out loud in front of an audience (e.g. for a code review). Comments should give the reader a general overview of the problem and an abstract strategy for what has to be done in a function. If the purpose is obvious and easily derivable from the identifiers, why create a second meta layer which needs extra maintenance and creates a dependency? Over-commented code often means the code doesn't have a good structure, or the author doesn't understand the problem well enough to abstract it. Although sometimes the project dictates code metrics, like a code-to-comment ratio that has to be satisfied. I see a lot of projects with Doxygen comments in them, but the actual content of that documentation is rather unhelpful.
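A contrived sketch of what I mean (the names and the shadow register are invented for the example): the second version reads out loud just fine without any comment.

#include <stdint.h>

static uint8_t portShadow;   /* stand-in for an output register, for illustration */

/* Version 1: the comment has to explain the code, and can quietly rot. */
void f(uint8_t x)
{
    portShadow = (portShadow & 0xF0u) | (x & 0x0Fu);   /* update the display?? */
}

/* Version 2: the names carry the meaning; no comment is required. */
void display_setDigit(uint8_t digitValue)
{
    uint8_t lowNibble = digitValue & 0x0Fu;
    portShadow = (portShadow & 0xF0u) | lowNibble;
}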
  30. 1 point
    This morning I find Microchip.com is down 07:15 PDT ( GMT - 8 )
  31. 1 point
According to your zip package, PBCLK is set to 48 MHz, not 96 MHz? And it's a PIC32MX470F512L, just for clarity 🙂 It would be a good idea to debug your project and have a look at the actual register settings, to see if Harmony did everything right. I would also suggest checking the analog output signal with a scope to verify the frequency and the signal quality. Is there a signal at higher baud rates, and what does it look like?
  32. 1 point
    Of course, the simulator may be fine. It may just be Harmony that's not implementing USART via DMA correctly...
  33. 1 point
Of course the answer here is that inline does not exist in C89 (there is no such keyword). In GNU89 you can define a C function in a C file and place inline on it as a hint for the optimizer, BUT in C99 and C11 this no longer works the same way: the practical upshot of the standard is that the only portable way to use inline from multiple C files is to have the function body in the header file ... Read this for some background: https://gustedt.wordpress.com/2010/11/29/myth-and-reality-about-inline-in-c99/ This is the relevant part: "So you’d have to put the implementation of such candidate functions in header files." EDIT: To be super pedantic about it, the standard does not exactly say you must place the definition in the header file, but if you want to use the function from more than one C file you will have to do exactly that for it to work.
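For reference, here is a minimal sketch of the C99/C11 pattern the linked article describes (the file names and the function are invented for the example): the inline definition lives in the header, and exactly one .c file provides the external definition.

/* fast_math.h */
#ifndef FAST_MATH_H
#define FAST_MATH_H

/* C99/C11: the inline definition goes in the header so every
   translation unit that wants to inline it can see the body. */
inline int clamp_to_byte(int value)
{
    if (value < 0)   return 0;
    if (value > 255) return 255;
    return value;
}

#endif

/* fast_math.c - exactly one .c file emits the external definition,
   which is used wherever the compiler decides not to inline. */
#include "fast_math.h"
extern inline int clamp_to_byte(int value);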
  34. 1 point
    If you have purchased a "MPLAB(R) Xpress PIC18F47K40 Evaluation Board" from Microchip (part number DM182027) and you are running into difficulty because the board is behaving strangely it is most likely caused by a silicon errata on this device! The errata can be downloaded here: http://ww1.microchip.com/downloads/en/DeviceDoc/PIC18F27-47K40-Silicon-Errata-and-Data-Sheet-Clarification-80000713E.pdf The relevant section of the Errata is shown at the end. What is happening is that the compiler is using a TBLRD instruction somewhere and this instruction is not behaving as expected due to a silicon bug in REV A2 of the PIC18F47K40, causing the read to fail and the program to malfunction. Typically this happens as part of the C initialization code generated by the XC8 compiler, and since the compiler is optimizing, changing the code may cause the problem to temporarily disappear because you have few enough global variables that a table read is no longer the fastest way to initialize the memory segment for variables with static linkage. The XC8 compiler can avoid generating the sequence which will cause the failure if you tell it in the linker settings to implement the workaround for this Errata. This is done by adding +NVMREG to the setting as follows. Note that this is under the section "XC8 Linker" and the Option Category "Additional Options". This is the relevant section of the Errata.
  35. 1 point
    Awesome, the forum will give you more permission to post files and images after you have made at least 3 posts.
  36. 1 point
Hi KM1, when we load RTCOUNTER it loads the "Timer0 without interrupt" configuration, but I see that the interrupt is enabled in the code. Could you please give some details on why the interrupt is enabled? Also, I see EUSART and SPI-MASTER code in it; if you could share some details on that as well it would be helpful.
  37. 1 point
In the ivory tower of the university, price is not that important. If you build fewer than 10 devices, even a difference of 5 $/€ each would not be worth wasting an hour of work 😉 For the student board I linked above, which has sold a few hundred times, it is a little more important to keep the price of all the components in the range of 10€ to make it attractive for the students to get one and build their own. (Letting them handle and solder the different sized components is one additional intention.) An 18F2xK22 is for sure not the cheapest uC you can get, but in 2013 it was one of the most feature-rich parts in the 8-bit PIC range that is relatively easy for beginners in microcontroller programming to understand, with all of its peripherals. Of course the presence of lots of tools like PICkit 3, and the tutors' knowledge of the MPLAB IDE and PIC18 in the labs, influenced the choice 😉 I think the tools are very important, especially if you do not work with them exclusively or if you want to use them for teaching. At the moment there is another subject where Python will be used as a first programming language for our students. A small part of this subject should be about embedded programming too. So we are playing around with CircuitPython and MicroPython to see if it is easy enough to get an insight in just two or three lessons. Therefore we bought some tiny boards with SAMD21 and ESP32 controllers. Maybe some day we will build our own CircuitPython/MicroPython board as we did with the PIC one, but for the beginning it is easier to use such controllers with huge libraries already available for them.
  38. 1 point
Hi! I just signed up; reading your stories is great :) Let's see if I can contribute a few more tales.
  39. 1 point
“Code generation, like drinking alcohol, is good in moderation.” — Alex Lowe This episode we are going to try something different. The brute force approach had the advantage of being simple and easy to maintain. The hand-crafted decision tree had the advantage of being fast. This week we will look at an option that will hopefully combine the simplicity of the string list and the speed of the decision tree. This week we will use a code generator to automatically create the tokenizing state machine. I will leave it to you to decide if we use generation in moderation. Let me introduce RAGEL http://www.colm.net/open-source/ragel/. I discovered RAGEL a few years ago when I was looking for a quick and dirty way to build some string handling state machines. RAGEL will construct a complete state machine that will handle the parsing of any regular expression. It can do tokenizing and it can do parsing. Essentially, you define the rules for the tokens and the functions to call when each token is found. For instance, you can write a rule to handle any integer and when an integer is found it can call your doInteger() method. For our simple example of identifying 6 words, the RAGEL code will be a bit overkill but it will be MUCH faster than a brute force string search and in the same ball park as the hand crafted decision tree. Let us get started.

First let us get the housekeeping out of the way. This part of the code you have seen before. It is identical to the first two examples I have already provided. There are two differences. First, this only LOOKS like C code. In fact, it is a RAGEL file (I saved it with a .rl extension) and you will see the differences in a moment. When I use a code synthesizer, I like to place the needed command line at the top of the file in comments. While comments are a smell, this sort of comment is pretty important.

// compile into C with ragel
// ragel -C -L -G2 example3.rl -o example3.c
//
#include <string.h>
#include <stdio.h>
#include "serial_port.h"

char * NMEA_getWord(void)
{
    static char buffer[7];
    memset(buffer,0,sizeof(buffer));
    do
    {
        serial_read(buffer,1);
    } while(buffer[0] != '$');

    for(int x=0;x<sizeof(buffer)-1;x++)
    {
        serial_read(&buffer[x], 1);
        if(buffer[x]==',')
        {
            buffer[x] = 0;
            break;
        }
    }
    return buffer;
}

enum wordTokens {NO_WORD = -1,GPGGA,GNGSA,GPGSV,GPBOD,GPDBT,GPDCN, GPRMC, GPBWC};

RAGEL is pretty nice in that they choose some special symbols to identify the RAGEL bits, so the generator simply passes all input straight to the output until it finds the RAGEL identifiers and then it gets to work. This architecture allows you to simply insert RAGEL code directly into your C (or other languages) and add the state machines in place. The first identifiers we find are the declaration of a state machine (foo seemed traditional). You can define more than one machine so it is important to provide a hint to the generator about which one you want to define. After the machine definition, I specified the location to place all the state machine data tables. There are multiple ways RAGEL can produce a state machine. If the machine requires data, it will go at the write data block.
 1  %% machine foo;
 2  %% write data;
 3
 4  enum wordTokens NMEA_findToken(char *word)
 5  {
 6      const char *p = word;
 7      const char *pe = word + strlen(word);
 8      int cs;
 9      enum wordTokens returnValue = NO_WORD;
10
11      %%{
12          action gpgga { returnValue = GPGGA; fbreak; }
13          action gngsa { returnValue = GNGSA; fbreak; }
14          action gpgsv { returnValue = GPGSV; fbreak; }
15          action gpbod { returnValue = GPBOD; fbreak; }
16          action gpdbt { returnValue = GPDBT; fbreak; }
17          action gpdcn { returnValue = GPDCN; fbreak; }
18          action gpbwc { returnValue = GPBWC; fbreak; }
19          action gprmc { returnValue = GPRMC; fbreak; }
20
21          gpgga = ('GPGGA') @gpgga;
22          gngsa = ('GNGSA') @gngsa;
23          gpgsv = ('GPGSV') @gpgsv;
24          gpbod = ('GPBOD') @gpbod;
25          gpdbt = ('GPDBT') @gpdbt;
26          gpdcn = ('GPDCN') @gpdcn;
27          gpbwc = ('GPBWC') @gpbwc;
28          gprmc = ('GPRMC') @gprmc;
29
30          main := ( gpgga | gngsa | gpgsv | gpbod | gpdbt | gpdcn | gpbwc | gprmc )*;
31
32          write init;
33          write exec noend;
34      }%%
35      return returnValue;
36  }

Next is the C function definition starting at line 4 above. I am keeping the original NMEA_findToken function as before. No sense in changing what is working. At the beginning of the function is some RAGEL housekeeping defining the range of text to process. In this case the variable p represents the beginning of the text while pe represents the end of the text. The variable cs is a housekeeping variable, and the token is the return value so initialize it to NO_WORD. The next bit is some RAGEL code. The %%{ defines a block of RAGEL much like /* defines the start of a comment block. The first bit of RAGEL defines all of the actions that will be triggered when the strings are identified. Honestly, these actions could be anything and I held back simply to keep the function identical to the original. It would be easy to fully define the NMEA data formats and fully decode each NMEA sentence. These simply identify the return token and break out of the function. If we had not already sliced up the tokens we would want to store our position in the input strings so we could return to the same spot. It is also possible to feed the state machine one character at a time, like in an interrupt service routine. After the actions, line 21 defines the search rules and the action to execute when a rule is matched. These rules are simply regular expressions (HA! REGEX and SIMPLE in the same sentence). For this example, the expressions are simply the strings. But if your regular expressions were more complex, you could go crazy. Finally, the machine is defined as matching any of the rules. The initialization and the actual execute code are placed and the RAGEL is complete. Whew!

Let us look at what happened when we compile it. One of my favorite programming tools is graphviz, specifically DOT. It turns out that RAGEL can produce a dot file documenting the produced state machine. Let's try it out.

bash> ragel -C -L -V example3.rl -o example3.dot
bash> dot example3.dot -T png -O

It would be nicer if all the numbers on the arrows were the characters rather than the ASCII codes, but I suppose I am nitpicking. Now you see why I named my actions after the sentences. The return arrow clearly shows which action is being executed when the words are found. It also shows that the action triggers when the last letter is found rather than a trailing character. I suppose if you had the word gpgga2, then you would need to add some additional REGEX magic. The dotted arrow IN leading to state 17 refers to any other transition not listed.
That indicates that any out-of-place letter simply goes back to 17 without triggering an ACTION. It is possible to define a “SYNTAX ERROR” action to cover this case but I did not care. For my needs, failing quietly is a good choice. This all looks pretty good so far. What does the C look like? 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 /* #line 1 "example3.rl" */ // compile into C with ragel // ragel -C -L -G2 example3.rl -o example3.c // #include < string.h > #include < stdio.h > #include "serial_port.h" char * NMEA_getWord(void) { static char buffer[7]; memset(buffer, 0, sizeof(buffer)); do { serial_read(buffer, 1); } while (buffer[0] != '$'); for (int x = 0; x < sizeof(buffer) - 1; x++) { serial_read( & buffer[x], 1); if (buffer[x] == ',') { buffer[x] = 0; break; } } return buffer; } enum wordTokens { NO_WORD = -1, GPGGA, GNGSA, GPGSV, GPBOD, GPDBT, GPDCN, GPRMC, GPBWC }; /* #line 34 "example3.rl" */ /* #line 39 "example3.c" */ static const int foo_start = 17; static const int foo_first_final = 17; static const int foo_error = 0; static const int foo_en_main = 17; /* #line 35 "example3.rl" */ enum wordTokens NMEA_findToken(char * word) { const char * p = word; const char * pe = word + strlen(word); int cs; enum wordTokens returnValue = NO_WORD; /* #line 57 "example3.c" */ { cs = foo_start; } /* #line 62 "example3.c" */ { switch (cs) { tr5: /* #line 45 "example3.rl" */ { returnValue = GNGSA; { p++; cs = 17; goto _out; } } goto st17; tr12: /* #line 47 "example3.rl" */ { returnValue = GPBOD; { p++; cs = 17; goto _out; } } goto st17; tr13: /* #line 50 "example3.rl" */ { returnValue = GPBWC; { p++; cs = 17; goto _out; } } goto st17; tr16: /* #line 48 "example3.rl" */ { returnValue = GPDBT; { p++; cs = 17; goto _out; } } goto st17; tr17: /* #line 49 "example3.rl" */ { returnValue = GPDCN; { p++; cs = 17; goto _out; } } goto st17; tr20: /* #line 44 "example3.rl" */ { returnValue = GPGGA; { p++; cs = 17; goto _out; } } goto st17; tr21: /* #line 46 "example3.rl" */ { returnValue = GPGSV; { p++; cs = 17; goto _out; } } goto st17; tr23: /* #line 51 "example3.rl" */ { returnValue = GPRMC; { p++; cs = 17; goto _out; } } goto st17; st17: p += 1; case 17: /* #line 101 "example3.c" */ if (( * p) == 71) goto st1; goto st0; st0: cs = 0; goto _out; st1: p += 1; case 1: switch (( * p)) { case 78: goto st2; case 80: goto st5; } goto st0; st2: p += 1; case 2: if (( * p) == 71) goto st3; goto st0; st3: p += 1; case 3: if (( * p) == 83) goto st4; goto st0; st4: p += 1; case 4: if (( * p) == 65) goto tr5; goto st0; st5: p += 1; case 5: switch (( * p)) { case 66: goto st6; case 68: 
goto st9; case 71: goto st12; case 82: goto st15; } goto st0; st6: p += 1; case 6: switch (( * p)) { case 79: goto st7; case 87: goto st8; } goto st0; st7: p += 1; case 7: if (( * p) == 68) goto tr12; goto st0; st8: p += 1; case 8: if (( * p) == 67) goto tr13; goto st0; st9: p += 1; case 9: switch (( * p)) { case 66: goto st10; case 67: goto st11; } goto st0; st10: p += 1; case 10: if (( * p) == 84) goto tr16; goto st0; st11: p += 1; case 11: if (( * p) == 78) goto tr17; goto st0; st12: p += 1; case 12: switch (( * p)) { case 71: goto st13; case 83: goto st14; } goto st0; st13: p += 1; case 13: if (( * p) == 65) goto tr20; goto st0; st14: p += 1; case 14: if (( * p) == 86) goto tr21; goto st0; st15: p += 1; case 15: if (( * p) == 77) goto st16; goto st0; st16: p += 1; case 16: if (( * p) == 67) goto tr23; goto st0; } _out: {} } /* #line 66 "example3.rl" */ return returnValue; } int main(int argc, char ** argv) { if (serial_open() > 0) { for (int x = 0; x < 24; x++) { char * w = NMEA_getWord(); enum wordTokens t = NMEA_findToken(w); printf("word %s,", w); if (t >= 0) printf("token %d\n", t); else printf("no match\n"); } } serial_close(); return 0; } And this is why we use a code generator. The code does not look too terrible. i.e., I could debug it if I thought there were some bugs and it does follow the state chart in a perfectly readable way. BUT, I hope you are not one of those programmers who finds GOTO against their religion. (Though Edsger Dijkstra did allow an exception for low level code when he wrote EWD215 https://www.cs.utexas.edu/users/EWD/transcriptions/EWD02xx/EWD215.html ) So how does this perform? STRNCMP IF-ELSE RAGEL -G2 GNGSA 399 121 280 GPGSV 585 123 304 GLGSV 724 59 225 GPRMC 899 83 299 GPGGA 283 113 298 And for the code size MPLAB XC8 in Free mode on the PIC16F1939 shows 2552 bytes of program and 1024 bytes of data. Don’t forget that printf is included. But this is comparable to the other examples because I am only changing the one function. So our fancy code generator is usually faster than the brute force approach, definitely slower than the hand-crafted approach and is fairly easy to modify. I think I would use the string compare until I got a few more strings and then make the leap to RAGEL. Once I was committed to RAGEL, I think I would see how much of the string processing I could do with RAGEL just to speed the development cycles and be prepared for that One Last Feature from Marketing. Next week we will look at another code generator and a completely different way to manage this task. Good Luck. example3.X.zip
  40. 1 point
When we left off we had just built a test framework that allowed us to quickly and easily try out different ways to identify NMEA keywords. The first method shown was a brute force string compare search. For this week, I promised to write about an if-else decoder. The brute force search was all about applying computing resources to solve the problem. This approach is all about applying human resources to make life easy on the computer. So this solution will suck. Let us press on.

The big problem with the string compare method is each time we discard a word, we start from scratch on the next word. Consider that most NMEA strings from a GPS start with the letters GP. It would be nice to discard every word that does not begin with a G and only look at each letter once. Consider this state machine: I did simplify the drawing… every invalid letter will transfer back to state 1, but that would clutter the picture. This would require the smallest number of searches to find the words. So one way to build this is to write a big IF-ELSE construct that covers all the choices. This will step through the letters and end up with a decision on what keyword was found.

enum wordTokens NMEA_findToken(char *word)
{
    enum wordTokens returnValue = NO_WORD;
    char c = *word++;
    if(c == 'G')
    {
        c = *word++;
        if(c == 'P')
        {
            c = *word++;
            if(c == 'G') // gpGga or gpGsv
            {
                c = *word++;
                if(c == 'G') // gpgGa
                {
                    c = *word++;
                    if(c == 'A')
                    {
                        if(*word == 0) // found GPGGA
                        {
                            returnValue = GPGGA;
                        }
                    }
                }
                else if(c == 'S') // gpgSv
                {
                    c = *word++;
                    if(c == 'V')
                    {
                        if(*word == 0) // found GPGSV
                        {
                            returnValue = GPGSV;
                        }
                    }
                }
            }
            else if(c == 'B') // gpBod
            {
                c = *word++;
                if(c == 'O')
                {
                    c = *word++;
                    if(c == 'D')
                    {
                        if(*word == 0)
                        {
                            returnValue = GPBOD;
                        }
                    }
                }
            }
            else if(c == 'D') // gpDcn or gpDbt
            {
                c = *word++;
                if(c == 'C')
                {
                    c = *word++;
                    if(c == 'N')
                    {
                        if(*word == 0)
                        {
                            returnValue = GPDCN;
                        }
                    }
                }
                else if(c == 'B')
                {
                    c = *word++;
                    if(c == 'T')
                    {
                        if(*word == 0)
                        {
                            returnValue = GPDBT;
                        }
                    }
                }
            }
        }
        else if(c == 'N') // gNgsa
        {
            c = *word++;
            if(c == 'G')
            {
                c = *word++;
                if(c == 'S')
                {
                    c = *word++;
                    if(c == 'A')
                    {
                        if(*word == 0)
                        {
                            returnValue = GNGSA;
                        }
                    }
                }
            }
        }
    }
    return returnValue;
}

And it is just that easy. This is fast, small and has only one serious issue in my opinion. I hope you are very happy with the words chosen, because making changes is expensive in programmer time. This example only has 6 6-letter words and is 100 lines of code. They are easy lines, but almost all of them will require rewriting if you change even one word. Here are the stats so you can compare with last week's string compare.

| Word  | STRNCMP | IF-ELSE |
| GNGSA | 399     | 121     |
| GPGSV | 585     | 123     |
| GLGSV | 724     | 59      |
| GPRMC | 899     | 83      |
| GPGGA | 283     | 113     |

These are the CPU cycles required on a PIC16F1939. You can verify in the simulator. That is all for now. Stay tuned, next time we will show a nice way to manage this maintenance problem. Good Luck
example2.c example2.X.zip
  41. 1 point
Ok I think I get it now. We can separate the quality of the link from the basic budget. We know that on a perfect link the slew rate is instant. I will use a 10% bit-time slew rate for the table calculation, as that is typically what I like to have. Sorry, I was too lazy to do the 8x case, but it is somewhere in between.

| Over Sampling | Samples from start to centre of stop bit | Budget without sampling uncertainty | As % | Sampling uncertainty | Error budget with sampling uncertainty | Clock error budget, ideal slew | Clock error budget, 10% bit slew |
| 4x  | 38  | ±2 sample periods | 5.26% | 1 period | ±1 sample period  | ±1.32% | ±0.52% |
| 16x | 152 | ±8 sample periods | 5.26% | 1 period | ±7 sample periods | ±2.3%  | ±2.03% |

From the PIC16F18875 datasheet the clock accuracy is ±2%, which means that from 0 to 60C and 2.3V to 5.5V you can use the internal oscillator for 2 devices to communicate reliably only if both devices are oversampling at 16x. If you want to use 4x oversampling you will not get reliable communication, as the clocks cannot remain within ±0.52%.
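To make the arithmetic easy to reproduce, here is a small sketch that computes the ideal-slew column of the table above (the 10% slew column subtracts a slew allowance from the sample-period budget in the same spirit). This is my own illustration, not code from any datasheet.

#include <stdio.h>

/* Clock error budget per device for a UART, assuming the receiver samples
   from the start edge to the centre of the stop bit (9.5 bit times for 8N1)
   and that transmitter and receiver share the budget equally. */
static double clock_budget_percent(int oversampling, double sample_uncertainty)
{
    double samples_to_stop_centre = 9.5 * oversampling;      /* 38 @ 4x, 152 @ 16x */
    double raw_budget  = oversampling / 2.0;                 /* +- half a bit, in sample periods */
    double usable      = raw_budget - sample_uncertainty;    /* lose one period to sampling */
    return 100.0 * (usable / samples_to_stop_centre) / 2.0;  /* split between both devices */
}

int main(void)
{
    printf("4x : +-%.2f%%\n", clock_budget_percent(4, 1.0));   /* about 1.32% */
    printf("16x: +-%.2f%%\n", clock_budget_percent(16, 1.0));  /* about 2.30% */
    return 0;
}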
  42. 1 point
    How to send an email via dial up modem, circa 1984...
  43. 1 point
    That court transcript was a fantastic read (I also had a good chuckle at "Parody Bits"). That link also led me to the Barr Embedded C Coding Standard, which was also worth a read for me, as I'm currently working under mostly a set of my own rules gathered through just personal experience. Might be finally time to have a formalized standards doc. Shame the textbook is kind of a dud.
  44. 1 point
Every Christmas I always have some project that I want to do, but then time flies and I can't get it finished. Last year, I decided to actually finish one project and I built the best Christmas lights ever. This was a quick and easy project that needed the following ingredients. You will need multiple LED strings depending upon the size of your tree. Here is 1 LED on the string: You can see the ribbon cable connecting them together. Each LED is about 12mm in diameter. The ribbon cable hides well, but perhaps one day I will paint the wire. One LED, the power supply and the Arduino are inside this box. I used the first LED in the box to light the box and to strain-relieve the cable. You can pass a USB cable inside if you want to reprogram it. The box is a simple laser-cut 6mm plywood box. The hex hole pattern was overkill and almost caused a fire in the laser! Of course I have a laser cutter in my shop. Everyone should have one. Currently I am simply running the demo code which cycles through a number of animations. One day I plan to add DMX software on the Arduino and a Raspberry Pi as the controller. Then I can be lazy and write code on the couch via WiFi to change the animations. IMG_0044.mov
  45. 1 point
Your pro-tip for the day: if you use this code from MCC on a PIC microcontroller, the function i2c_ISR must be duplicated due to the way the C stack is implemented on a PIC MCU. If you want to save code space, simply decide how you want your code to run (interrupt or polled) and remove the call to i2c_isr from the master interrupt code, or remove the test in master_operation. Either one will remove the duplication, because the function i2c_ISR will then not exist in both contexts (interrupt and "main").
  46. 1 point
    The XPRESS boards do NOT have a debugger. However, they are being replaced with CURIOSITY nano boards. The Curiosity Nano PIC16F18446 and the Curiosity Nano 4809 both raise the XPRESS hardware to the next level. https://www.microchip.com/developmenttools/ProductDetails/DM164144 https://www.microchip.com/DevelopmentTools/ProductDetails/DM320115 The on-board debugger used by these nano boards is also present on the AVR IoT WG development kit. Expect to see lots of new development hardware with on-board debugging in the future. They include : 1) CDC serial port 2) Mass Storage Drag and drop hex file programming 3) Mass Storage drag and drop serial messaging (send a text file with a keyword and it goes out the serial port) 4) MPLAB support with native programming & debug The AVR versions are also supported by Studio. For those with AVR memories, these are updated versions of the old EDBG debuggers. The updates add PICmcu support but they are still CMSIS based debuggers. Here is a project I built using a PIC16F18446 Curiosity Nano directly. This is a model rocket launch controller. More information on the rocket controller is here: https://workspace.circuitmaker.com/Projects/Details/joseph-julicher/rocket-controller
  47. 1 point
Pretty cool stuff! Around 2004 I rewound a single-phase 3/4 HP 120 VAC 4-pole induction motor to make a 7 VAC, 12-pole, three-phase motor. I used a 220V 3 HP VFD and two step-down transformers to run it at 240 Hz, which was about 1800 RPM. My idea was to run a three-phase motor directly on a 12 VDC battery, or perhaps a 48V battery pack clocked at 4x the frequency to get 4x the power. I also made a very rough VFD using a PIC16F684, with modified sine (rectangular) waves, and it did run, but after a bit the driver transistors blew out. I know a lot more now about how to properly choose and drive MOSFETs, and that's what I plan to do with this project. I added functionality to my spreadsheet that adds a percentage of 3rd harmonic to the synthesized waveform, and I can reproduce the increased output voltage that way. This is 15% harmonic. I updated my file: http://enginuitysystems.com/pix/electronics/Three_Phase_Sine_Waves.ods
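For anyone curious about the third-harmonic trick mentioned above, here is a tiny sketch (my own illustration, not taken from the linked spreadsheet) that builds a table of fundamental plus 15% third harmonic. The combined waveform peaks below 1.0, so the fundamental can be scaled up before the PWM saturates, which is where the extra output voltage comes from.

#include <stdio.h>
#include <math.h>

#define TABLE_SIZE      64
#define THIRD_HARMONIC  0.15    /* 15% third harmonic, as in the post */

int main(void)
{
    const double PI = 3.14159265358979;
    double peak = 0.0;

    for (int i = 0; i < TABLE_SIZE; i++)
    {
        double theta = 2.0 * PI * i / TABLE_SIZE;
        /* fundamental plus 15% third harmonic */
        double v = sin(theta) + THIRD_HARMONIC * sin(3.0 * theta);
        if (fabs(v) > peak)
        {
            peak = fabs(v);
        }
        printf("%2d: %+.4f\n", i, v);
    }

    /* peak comes out below 1.0, so the fundamental can be scaled by ~1/peak */
    printf("peak = %.4f\n", peak);
    return 0;
}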
  48. 1 point
    In circuit debugging a PIC microcontroller requires 3 features from the chip in both hardware and software. 1) a Debug Exec (software) 2) In Circuit Debug Hardware (ICD) (silicon) 3) a method of communicating with the debugger (I/o pins) The software is a special "debug exec" that is placed in special test memory by the debugger. This "debug exec" provides the serial interface to the debugger and follows the debugger's commands to interact with the hardware debugging support. The "debug exec" uses some CPU resources like 1 level of stack and a few bytes of RAM in addition to some program memory. On the "enhanced" PIC16's (all the PIC16F1xxx devices) additional stack, ram and flash memory is added so there is no memory penalty for using the debugger. The hardware is a number of silicon features like address/data match comparators that implement the breakpoints. The number of hardware breakpoints is related to the number of these comparators that are physically implemented in the silicon. The breakpoints must be implemented in hardware if they are to have zero impact on your code. Software breakpoints are literally extra instructions inserted to perform the breakpoint and they cannot be "turned on/off" and they will affect the program size/timing. The I/O pin requirement is for a communications path to the "debug exec" and for an external way to trigger a manual "HALT". This is done through the PGC/PGD pins so no additional pins are "wasted" beyond what you need for programming. Because the hardware portion of debugging requires additional silicon area and only the developers will ever need it, many of the lower cost devices remove it to save cost. Instead a special version of the device is created and sold on a debug header. The debug header (part number AC162059 for the 12F508) provides the ICD silicon so the volume production does not need to pay for a feature they will never use. For low pin count devices such as the 12F508, it is extra tricky because with only 8 total pins and 5 I/O pins, it is not reasonable to build an application and use debug as that will leave 3 I/O pins for your "secret sauce". For these devices the special part on the debug header is actually in a larger package with 8 pins exactly the same as the target device and additional pins to implement programming & debugging. This allows a nearly zero impact solution to debugging but does require buying an additional device. If you are a maker and are concerned with the minimum size option, I recommend you get good at soldering QFN packages. For the same size as an SOIC 8, you will get 20 pins and in a PIC16F1xxx device you will also get debugging, more memory, better peripherals and a lower cost. Good Luck
  49. 1 point
If you use the __builtin_software_breakpoint() compiler built-in function and look at the list file generated by the compiler, you will see that it translates into the instruction (0x003) shown as "trap" - this instruction is not documented in the device instruction set or datasheet, but it is easy enough to discover by looking at the .lst file.

1211
1212      ;main.c: 17: __builtin_software_breakpoint();
1213      07F1    0003    trap
1214
1215      ;main.c: 18: __nop();
1216      07F2    0000    nop
1217      ...
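A hedged usage sketch: wrapping the built-in so it only fires in debug builds. The __DEBUG guard assumes your IDE defines that macro for debug configurations (MPLAB X normally does); the function names are invented for the example.

#include <xc.h>

/* Halt here only in debug builds; release builds compile to nothing. */
static inline void debug_trap(void)
{
#ifdef __DEBUG
    __builtin_software_breakpoint();
#endif
}

void protocol_handleError(int errorCode)
{
    debug_trap();        /* stops the debugger right at the failure... */
    (void)errorCode;     /* ...then fall through to normal error handling */
}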
  50. 1 point
    Ok, so every time I set up a pin as an output MCC insists on making it "Analog". It looks like this setting has something to do with the ANSEL register, but surely an output is not Analog so why do they do this?
 

