Everything posted by Orunmila

  1. I am trying to configure the DAC on the 47K40 Xpress Evaluation board using MCC and I am confused by what I see. The screen I see is to the right ->

I can understand that the Vref+ and Vref- fields are necessary when you are using the input pins for a reference, but it is really confusing when Vref+ is shown read-only AND shows an incorrect value, as it does here. I would have expected that if I select Vdd as the reference, the Vref+ field would either be hidden, or would show the same value as Vdd. I also do not understand why the output of the DAC is referred to as "Required ref"; this talk of "Ref" all over the place is very confusing to the unsuspecting user. You should call the DAC out Value the "Required DAC out Value" above the line where you show the result of the calculation, and when the references are set to Vdd and Vss then Vref+ should match Vdd and Vref- should be 0, even if those fields are read-only - even better, just hide them if they should not be used!

It turns out that Vref+ and Vref- are ignored if you have selected Vdd and Vss, and that "Required Ref" actually means the output that you desire to get from the DAC, for those as confused as I was. When you select the Vref+ pin as the reference, the Vdd field is ignored in turn.

It also seems that the Vdd and Vref+ fields allow you to enter values way outside the electrical specs of the device. MCC does create a note that this is the case, but I would have expected a proper warning for something as dangerous as this; at first I did not even notice it, as it was green and presented as just a note.

Lastly, when I select the FVR _buf2 as the positive reference for the DAC the screen no longer makes any sense. Now I can only edit Vdd (which is 5V), but my reference is set to 2.048V via the 2x buffer on the FVR, and the Vref+ field is read-only so I cannot update it. MCC is, however, using the correct input reference of 2.048V, as can be seen from the output being limited to 1.984V even when I require 3V.
Required Boilerplate:
  • Device: PIC18F74K40
  • MCC: 3.75
  • MPLAB-X: 5.10
  • Foundation Services: 0.1.31
  • PIC18 Lib: 1.76.0
  2. Awesome! The forum will give you more permissions to post files and images after you have made at least 3 posts.
  3. Exactly how long is a nanosecond? This Lore blog is all about standing on the shoulders of giants. Back in February 1944 IBM shipped the Harvard Mark I to Harvard University. It looked like this: The Mark I was a remarkable machine at the time; it could perform an addition in 1 cycle (which took roughly 0.3 seconds) and a multiplication in 20 cycles, or 6 seconds. Calculating sin(x) could run up to 60 seconds (1 minute). The team that ran this electromechanical computer had on it a remarkable young giant, Grace Brewster Murray Hopper, who went on to become a Rear Admiral in the US Navy before her retirement some 43 years later. During her career as one of the first and finest computer scientists she was involved in a myriad of innovations. In 1949 she proposed the concept of creating a human-readable language made up entirely of English words to program a computer. Her idea was readily dismissed because "computers do not understand English". It was only 3 years later that her idea gained traction, when she published a paper entitled "The education of a computer" on the subject. She called this a "compiler". In 1959 she was part of the team commissioned to develop a new programming language for business use. It was called COBOL. But enough of the introductions already! If we had to go over everything Grace Hopper accomplished we would be here all day - here is the Wikipedia page on Grace Hopper for the uninitiated who do not know who this remarkable woman was. I have never seen a better explanation of orders of magnitude and scale than the representation Grace Hopper was known for, explaining how long a nanosecond really is. Here is a video from the archives of her explaining this brilliantly in layman's terms in just under 2 minutes. I also love the interview Letterman did with her back in 1986, where she explained this briefly on national television.
For television she stepped up her game a notch and took the nanosecond to the next level, also explaining through an analogy how long a picosecond is! This 10-minute interview really goes a long way in capturing the essence of who Grace Hopper really was - a remarkable pioneer in engineering indeed!
  4. Here you go - I have compiled this with the +NVMREG option, and confirmed that this writes the appropriate bits before doing initialization of the function pointers through the TBLRD instruction. KM1, I will leave this here for you to test. Uploading the project zip as well as the HEX I compiled.

I did find a host of bugs in the RTCOUNTER when you are using TIMER0 in 8-bit mode! It is clear this combination needs some work:
1. The left shift in readTimer was still <<16 even when the timer was 8-bit, causing all kinds of havoc.
2. The definition of the global variable g_rtcounterH was 16-bit, limiting us to a 24-bit range, but RTCOUNTER_CONCATENATE_TIMER_TICKS was defined specifying a 16-bit range, so the math was all wonky.

I set the timer to have a tick of 1us, which makes the math easy, and then got 1s by setting the timer reschedule value to 1,000,000. That should make the output toggle every 1s, or come on every 2s... For the blog post we used TIMER1, and for that implementation everything seemed to be correct. Also, I did not run it long enough to rebase (wrap), so that may also have issues; that should happen after about 4000 seconds, which is longer than it sounds... xpress1.zip
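To make bug #1 concrete, here is a minimal sketch of the concatenation involved. The names are hypothetical stand-ins for the real RTCOUNTER internals; this only illustrates why the shift width must match the timer mode:

```c
#include <stdint.h>

static uint8_t  TMR0_value;    /* stands in for the hardware timer in 8-bit mode */
static uint32_t g_rtcounterH;  /* software overflow counter; must be wide enough
                                  to keep the full range (see bug #2 above)      */

/* Concatenate the software counter with the hardware timer value.
   In 8-bit mode the software part must be shifted left by 8 - a
   leftover <<16 here puts the two halves out of step. */
static uint32_t readTimer(void)
{
    return (g_rtcounterH << 8) | TMR0_value;
}
```

With a 16-bit timer the shift would be 16 instead, which is exactly the mismatch the bug describes.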
  5. Oh wow I did not realize this was implemented as an errata! That would by far be the best option! I am going to try it out and upload the project here, then KM1 should be able to test when he gets back!
  6. @Prasad, that pointer cannot be null according to the C code, it is initialized at startup. I think you just found the problem! Good detective work! Of course, if you check for null there and it is indeed null, then the device will not reset, but the callback will also never be called, so the problem is not yet solved... Let's try out that errata, I think that will fix it. If not, let's keep investigating how it is possible for this pointer to be null. It is either not initialized (the errata could cause this) or it is overwritten with null at some point. Which one is it?
  7. The full errata can be downloaded here : http://ww1.microchip.com/downloads/en/DeviceDoc/PIC18F27-47K40-Silicon-Errata-and-Data-Sheet-Clarification-80000713E.pdf If this is the problem then it is quite a serious one, as everyone using C with those boards will be caught by it! It would mean that they ran a production batch of those boards with rev A2 silicon!
  8. Best guess right now is that you are hitting this Silicon bug. Please see if you can attempt this workaround.
  9. Hmmm, that is very interesting. If it resets on the hardware due to an MCLR reset, and not in the simulator, it probably has nothing to do with the software!
  10. Hey Prasad, that should use almost no stack space? Very strange. Can you check the reset bits to see if it really is a stack overflow?
  11. I have to add here that the single thing which trips most people up on RTCOUNTER is the timer period. The timer MUST be free-running, which means that you can NOT use MCC to set a timer period by using a reload or PR value; the timer HAS to be set to maximum period, and the only thing you can modify is the prescaler of the timer to adjust the tick. I know this is limiting, but it allows us to have a purely monotonic, upwards-counting number which we can concatenate with our software variable to construct the timer. Remember that we want as few interrupts as possible, so running the timer for the MAX period is exactly what we want to do here! This library does not have an event when the timer gets an interrupt or overflows (that is what the TIMEOUT driver does if you want that, but there is quite a price for it). When you call the check function it will simply compare the current timer register with the absolute time variable for the next timer to expire; it is a quick compare, so pretty cheap to do.
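As a rough sketch of what that check amounts to - simplified to a single timer, with names I made up (the real library keeps a list of callbacks) - the polled compare looks something like this:

```c
#include <stdint.h>

static uint32_t fakeTicks;     /* stands in for the free-running, monotonic timer */
static uint32_t readTimer(void) { return fakeTicks; }

static uint32_t absoluteTimeout;  /* absolute tick count at which the next timer expires */
static uint32_t periodTicks;      /* reschedule period, in timer ticks */

typedef void (*rtcount_callback_t)(void);

/* The cheap polled check: compare the current count to the absolute
   timeout; if it has passed, reschedule and run the callback. The
   unsigned subtraction keeps the compare correct across wrap-around. */
static void rtcount_poll(rtcount_callback_t callback)
{
    if ((uint32_t)(readTimer() - absoluteTimeout) < 0x80000000u) {
        absoluteTimeout += periodTicks; /* reschedule relative to the old deadline */
        callback();
    }
}

static unsigned fireCount;                 /* demo callback just counts invocations */
static void onTimeout(void) { fireCount++; }
```

Because the timer is free-running, rescheduling is just an addition to the old deadline, so the period stays drift-free no matter how late the poll happens.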
  12. The basic idea behind this principle is that taking measurements influences the thing you are measuring. In microcontrollers we all get introduced to this idea at some point: when you are trying to figure out whether the oscillator is running and you measure it with a scope probe, you realize - of course only after a couple of hours of struggling - that the 10pF impedance load the scope probe adds to the pin is actually causing the oscillator, which was working just fine, to stop dead.

In software we have similar problems. First we find out we cannot set breakpoints because the optimizer has mixed the instructions so much that they no longer correlate to the C instructions. We are advised to disable optimization so that we can set a breakpoint and inspect what is going on, only to discover that with optimizations disabled the problem is gone. We think we are smart and add printf's to the code, side-stepping the optimization problem entirely, only to end up with working code once again - and the moment we remove the prints it breaks! It is almost as if the bug disappears the moment you start looking at it...

These "Heisenbugs" are usually an indication that there is some kind of concurrency problem or race condition, where the timing of the code is a critical part of the problem being exposed. The oldest reference to a "Heisenbug" is a 1983 paper in the ACM. As I often do, here is the link to Heisenbug on Wikipedia.
  13. Ok now I am quite stumped. Is the time period there around 1ms per pulse? Is it possible for you to single step (instruction step) the code in the debugger and see when the line toggles?
  14. To confirm above I changed your code to use TMR0 in 8-bit mode so that I can test it in the simulator. With the prescaler I set the period to 1.024ms, so it is not quite perfect, but close enough for a test (2.4% error). I attach the project with these settings, this works fine in the simulator. xpress1.zip
  15. Oh my, I have run it in the simulator and this is definitely broken. Let me debug it and I will get back to you with a clear answer on how to fix this and what is going on.

EDIT: OK. RTCOUNTER is designed for the timer to represent the bottom bits of a concatenated timer value. This ONLY works if the timer runs from 0 and overflows at 0xFFFF. This means that you cannot set the timer period to 1ms the way that you did there; you can ONLY set the timer period to MAXIMUM, which means the timer will run from 0x0000 to 0xFFFF, forming the bottom 16 bits of the concatenated timer. I cannot test the code with TMR0 in the simulator as the simulator has a bug which does not allow TMR0 to work in 16-bit mode, and I do not have a 47K40 on me to test it on the actual hardware.

What you should see is this:
1. TMR0L and TMR0H should count upwards from 0x0000.
2. When you call create timer it should take the current value of ((TMR0H << 8) + TMR0L) and add 1000 to it, which should give you something like 1222 on the first cycle.
3. At this point, every time you call rtcount_callNextCallback() it should compare the current timer value obtained by ReadTimer() to 1222. It should do nothing until the timer exceeds 1222, then call the callback once and add 1000 to the absolute timeout, which should now be 2222, and the cycle should repeat.

Please check if this is what you see on the hardware; we can fix the period once this is working correctly. This should have a period of 8ms per tick, so 1000 ticks should take about 8 seconds...

What happened in your case in 16-bit mode with the period set to 1ms is that the rtcount_callNextCallback() routine ALWAYS thought that there was a timeout: it sets the timeout to be at something like 222, but because you reload the timer with a value much larger than this, the routine always thinks the timer has timed out and it toggles the I/O pin on every pass through the while(1).
  16. Got it, I am taking a look for you. BTW, you can also upload directly here by attaching files to your post.
  17. Can you please post some code? Or even better, the entire project (you can turn your project into a zip file using the "Package" option when you right-click on the project name in MPLAB-X). I am not sure I understand the problem yet.
  18. Structures in the C Programming Language Structures are one of the most misunderstood concepts in C. We see a lot of questions about the use of structs, often simply about the syntax and portability. I want to explore both of these and look at some best-practice use of structures in this post, as well as some lesser known facts. Covering it all will be pretty long, so I will start off with the basics, the syntax and some examples, then move on to some more advanced stuff. If you are an expert who came here for more advanced material, please jump ahead using the links supplied. Throughout I will refer often to the C99 ANSI C standard, which can be downloaded from the link in the references. If you are not using a C99 compiler some things like designated initializers may not be available. I will try to point out where something is not available in older compilers that only support C89 (also known as C90). C99 is supported in XC8 from v2.0 onwards.

Advanced topics handled lower down: Scope; Designated Initializers; Declaring Volatile and Const; Bit-Fields; Padding and Packing of structs and Alignment; Deep and Shallow copy of structures; Comparing Structs.

Basics A structure is a compound type in C which is known as an "aggregate type". Structures allow us to use sets of variables together like a single aggregate object. This allows us to pass groups of variables into functions, assign groups of variables to a destination location as a single statement, and so forth. Structures are also very useful when serializing or de-serializing data over communication ports. If you are receiving a complex packet of data it is often possible to define a structure specifying the layout of the variables, e.g. the IP protocol header structure, which allows more natural access to the members of the structure.
Lastly, structures can be used to create register maps, where a structure is aligned with CPU registers in such a way that you can access the registers through the corresponding structure members. The C language has only 2 aggregate types, namely structures and arrays. A union is notably not considered an aggregate type as it can only have one member object (overlapping objects are not counted separately). [Section "6.5.2 Types" of C99]

Syntax The basic syntax for defining a structure follows this pattern.

struct [structure tag] {
    member definition;
    member definition;
    ...
} [one or more structure variables];

As indicated by the square brackets, both the structure tag (or name) and the structure variables are optional. This means that I can define a structure without giving it a name. You can also just define the layout of a structure without allocating any space for it at the same time. What is important to note here is that if you are going to use a structure type throughout your code, the structure should be defined in a header file, and the structure definition should then NOT include any variable definitions. If you do include the structure variable definition part in your header file, this will result in a different variable with an identical name being created every time the header file is included! This kind of mistake is often masked by the fact that the compiler will co-locate these variables, but this kind of behavior can cause really hard to find bugs in your code, so never do that! Declare the layout of your structures in a header file and then create the instances of your variables in the C file they belong to. Use extern declarations if you want a variable to be accessible from multiple C files, as usual. Let's look at some examples.

Example 1 - Declare an anonymous structure (no tag name) containing 2 integers, and create one instance of it. This means allocate storage space in RAM for one instance of this structure.
struct {
    int i;
    int j;
} myVariableName;

This structure type does not have a name, so it is an anonymous struct, but we can access the variables via the variable name which is supplied. The structure type may not have a name, but the variable does. When you declare a struct like this it is not possible to declare a function which will accept this type of structure by name.

Example 2 - Declare a type of structure which we will use later in our code. Do not allocate any space for it.

struct myStruct {
    int i;
    int j;
};

If we declare a structure like this we can create instances or define variables of the struct type at a later stage as follows. (According to the standard, "A declaration specifies the interpretation and attributes of a set of identifiers. A definition of an identifier is a declaration for that identifier that causes storage to be reserved for that object" - 6.7)

struct myStruct myVariable1;
struct myStruct myVariable2;

Example 3 - Declare a type of structure and define a type for this struct.

typedef struct myStruct {
    int i;
    int j;
} myStruct_t; // Not to be confused with a variable declaration
              // typedef changes the syntax here - myStruct_t is part of the typedef, NOT the struct definition!

// This is of course equivalent to
struct myStruct {
    int i;
    int j;
}; // Now if you placed a name here it would allocate a variable
typedef struct myStruct myStruct_t;

This distinction is a constant source of confusion for developers, and this is one of many reasons why using typedef with structs is NOT ADVISED. I have added in the references a link to some archived conversations which appeared on usenet back in 2002. In these messages Linus Torvalds explains much better than I can why it is generally a very bad idea to use typedef with every struct you declare, as has become the norm for so many programmers today. Don't be like them!
In short, typedef is used to achieve type abstraction in C. This means that the owner of a library can at a later time change the underlying type without telling users about it, and everything will still work the same way. But if you are not using the typedef exactly for this purpose, you end up abstracting, or hiding, something very important about the type. If you create a structure it is almost always better for the consumer to know that they are dealing with a structure: it is not safe to do comparisons like == on the struct, and it is also not safe to copy the struct using = due to deep copy problems (described later on). By letting the users of your structs know explicitly that they are using structs, you will avoid a lot of really hard to track down bugs in the future. Listen to the experts! This all means that the BEST PRACTICE way to use structs is as follows.

Example 4 - How to declare a structure, instantiate a variable of this type and pass it into a function. This is the BEST PRACTICE way.

struct point { // Declare a cartesian point data type
    int x;
    int y;
};

// Declare a function which takes struct point as parameter by value
void pointProcessor(struct point p)
{
    int temp = p.x;
    ... // and the rest
}

void main(void)
{
    // local variables
    struct point myPoint = {3,2}; // Allocate a point variable and initialize it at declaration.
    pointProcessor(myPoint);
}

As you can see we declare the struct, and it is clear that we are defining a new structure which represents a point. Because we are using the structure correctly it is not necessary to call this point_struct or point_t, because when we use the structure later it will be accompanied by the struct keyword, which will make its nature perfectly clear every time it is used.
When we use the struct as a parameter to a function we explicitly state that this is a struct being passed; this acts as a caution to the developers who see this that deep/shallow copies may be a problem here and need to be considered when modifying or copying the struct. We also state this explicitly when a variable is declared, because the point where we allocate storage is the best time to consider structure members that are arrays, or pointers to characters, or something similar, which we will discuss later under deep/shallow copies and also comparisons and assignments. Note that this example passes the structure to the function "by value", which means that a copy of the entire structure is made on the parameter stack and this copy is passed into the function, so changing the parameter inside the function will not affect the variable you are passing in; you will be changing only the temporary copy.

Example 5 - HOW NOT TO DO IT! You will see lots of examples on the web doing it this way; it is not best practice, please do not do it this way!

// This is an example of how NOT to do it
// This does the same as example 4 above, but doing it this way abstracts the type in a bad way
// This is what Linus Torvalds warns us against!
typedef struct point_tag { // Declare a cartesian point data type
    int x;
    int y;
} point_t;

void pointProcessor(point_t p)
{
    int temp = p.x;
    ... // and the rest
}

void main(void)
{
    // local variables
    point_t myPoint = {3,2}; // Allocate a point variable and initialize it at declaration.
    pointProcessor(myPoint);
}

Of course now the tag name of the struct has no purpose, as the only thing we ever use it for is to declare yet another type with another name - a source of endless confusion to new C programmers, as you can imagine! The mistake here is that the typedef is used to hide the nature of the variable.

Initializers As you saw above, it is possible to assign initial values to the members of a struct at the time of definition of your variable.
There are some interesting rules related to initializer lists which are worth pointing out. The standard requires that initializers be applied in the order that they are supplied, and that all members for which no initializer is supplied shall be initialized to 0. This applies to all aggregate types. This is all covered in section 6.7.8 of the standard. I will show a couple of examples to clear up common misconceptions here. Descriptions are all in the comments.

struct point {
    int x;
    int y;
};

void function(void)
{
    int myArray1[5];         // This array has random values because there is no initializer
    int myArray2[5] = { 0 }; // Has all its members initialized to 0
    int myArray3[5] = { 5 }; // Has first element initialized to 5, all other elements to 0
    int myArray4[5] = { };   // Has all its members initialized to 0 (note: empty braces are a GCC extension, not standard C99)
    struct point p1;         // x and y both indeterminate (random) values
    struct point p2 = {1, 2}; // x = 1 and y = 2
    struct point p3 = { 1 };  // x = 1 and y = 0
    // Code follows here
}

These rules about initializers are important when you decide in which order to declare the members of your structures. We saw a great example of how user interfaces can be simplified by placing members to be initialized to 0 at the end of the list of structure members when we looked at the examples of how to use RTCOUNTER in another blog post. More details on initializers, such as designated initializers and variable length arrays, which were introduced in C99, are discussed in the advanced section below.

Assignment Structures can be assigned to a target variable just the same as any other variable. The result is the same as if you used the assignment operator on each member of the structure individually. In fact, one of the enhancements of the "Enhanced Midrange" core in all PIC16F1xxx devices is the capability to do shallow copies of structures faster through specialized instructions!
struct point { // Declare a cartesian point data type
    int x;
    int y;
};

void main(void)
{
    struct point p1 = {4,2}; // p1 initialized through an initializer-list
    struct point p2 = p1;    // p2 is initialized through assignment
    // At this point p2.x is equal to p1.x and p2.y is equal to p1.y
    struct point p3;
    p3 = p2; // And now all three points have the same value
}

Be careful though: if your structure contains external references such as pointers, you can get into trouble, as explained later under Deep and Shallow copy of structures.

Basic Limitations Before we move on to advanced topics: as you may have suspected, there are some limitations to how much of each thing you can have in C. The C standard calls these limits Translation Limits. They are a requirement of the C standard specifying what the minimum capabilities of a compiler have to be to call itself compliant with the standard. This ensures that your code will compile on all compliant compilers as long as you do not exceed these limits. The Translation Limits applicable to structures are: external identifiers must use at most 31 significant characters, which means structure names or members of structures should not exceed 31 unique characters; at most 1023 members in a struct or union; at most 63 levels of nested structure or union definitions in a single struct-declaration-list.

Advanced Topics Scope Structure, union, and enumeration tags have scope that begins just after the appearance of the tag in a type specifier that declares the tag. When you use typedefs, however, the type name only has scope after the type declaration is complete. This makes it tricky to define a structure which refers to itself when you use typedefs to define the type, something which is important to do if you want to construct something like a linked list. I regularly see people tripping themselves up with this because they are using the BAD way of using typedefs. Just one more reason not to do that! Here is an example.
// Perfectly fine declaration which compiles, as myList has scope inside the curly braces
struct myList {
    struct myList* next;
};

// This DOES NOT COMPILE !
// The reason is that myList_t only has scope after the closing curly brace where the type name is supplied.
typedef struct myList {
    myList_t* next;
} myList_t;

As you can see above, we can easily refer a member of the structure to a pointer to the structure itself when you stay away from typedefs, but how do you handle the more complex case of two separate structures referring to each other? In order to solve that one we have to make use of incomplete struct types. Below is an example of how this looks in practice.

struct a; // Incomplete declaration of a
struct b; // Incomplete declaration of b

struct a { // Completing the declaration of a with member pointing to still incomplete b
    struct b * myB;
};

struct b { // Completing the declaration of b with member pointing to now complete a
    struct a * myA;
};

This is an interesting example from the standard on how scope is resolved.

Designated Initializers (introduced in C99) Example 4 above used initializer-lists to initialize the members of our structure, but we were only able to omit members at the end, which limited us quite severely. If we could omit any member from the list, or rather include members by designation, we could supply the initializers we need and let the rest be set safely to 0. This was introduced in C99. This addition had a bigger impact on unions, however. There is a rule that initializer-lists shall be applied solely to the first member of a union. It is easy to see why this was necessary: since the structs which comprise a union do not all have the same number of members, it would be impossible to apply a list of constants to an arbitrary member of the union. In many cases this means that designated initializers are the only way that unions can be initialized consistently. Examples with structs.
struct multi { int x; int y; int a; int b; }; struct multi myVar = {.a = 5, .b = 3}; // Initialize the struct to { 0, 0, 5, 3 } Examples with a Union. struct point { int x; int y; }; struct circle { struct point center; int radius; }; struct line { struct point start; struct point end; }; union shape { struct circle mCircle; struct line mLine; }; void main(void) { volatile union shape shape1 = {.mLine = {{1,2}, {3,4}}}; // Initialize the union using the line member volatile union shape shape2 = {.mCircle = {{1,2}, 10}}; // Initialize the union using the circle member ... } The type of initialization of a union using the second member of the union was not possible before C99, which also means if you are trying to port C99 code to a C89 compiler this will require you to write initializer functions which are functionally different and your port may end up not working as expected. Initializers with designations can be combined with compound literals. Structure objects created using compound literals can be passed to functions without depending on member order. Here is an example. struct point { int x; int y; }; // Passing 2 anonymous structs into a function without declaring local variables drawline( (struct point){.x=1, .y=1}, (struct point){.y=3, .x=4}); Volatile and Const Structure declarations When declaring structures it is often necessary for us to make the structure volatile, this is especially important if you are going to overlay the structure onto registers (a register map) of the microprocessor. It is important to understand what happens to the members of the structure in terms of volatility depending on how we declare it. This is best explained using the examples from the C99 standard. 
struct s { // Struct declaration
    int i;
    const int ci;
};

// Definitions
struct s s;
const struct s cs;
volatile struct s vs;

// The various members have the types:
s.i   // int
s.ci  // const int
cs.i  // const int
cs.ci // const int
vs.i  // volatile int
vs.ci // volatile const int

Bit Fields It is possible to include in the declaration of a structure how many bits each member should occupy. This is known as "Bit Fields". It can be tricky to write portable code using bit-fields if you are not aware of their limitations. Firstly, the standard states that "A bit-field shall have a type that is a qualified or unqualified version of _Bool, signed int, unsigned int, or some other implementation-defined type." Further to this it also states that "As specified in 6.7.2 above, if the actual type specifier used is int or a typedef-name defined as int, then it is implementation-defined whether the bit-field is signed or unsigned." This means, effectively, that unless you use _Bool or unsigned int your structure is not guaranteed to be portable to other compilers or platforms. The recommended way to declare portable and robust bitfields is as follows.

struct bitFields {
    unsigned enable : 1;
    unsigned count : 3;
    unsigned mode : 4;
};

When you use any of the members in an expression they will be promoted to a full-sized unsigned int during the expression evaluation. When assigning back to the members, values will be truncated to the allocated size. It is possible to use anonymous bitfields to pad out your structure, so you do not need to use dummy names in a struct if you build a register map with some unimplemented bits. That would look like this:

struct bitFields {
    unsigned int enable : 1;
    unsigned : 3;
    unsigned int mode : 4;
};

This declares a type which is at least 8 bits in size and has 3 padding bits between the members "enable" and "mode".
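For comparison, the same enable/mode layout can be handled with explicit shifts and masks. This is a sketch - the bit positions and helper names are my own choices - but it pins down exactly where each field lives, independent of how a given compiler packs bitfields:

```c
#include <stdint.h>

/* Explicit layout: bit 0 = enable, bits 1..3 unused, bits 4..7 = mode */
#define ENABLE_Pos 0u
#define ENABLE_Msk (0x01u << ENABLE_Pos)
#define MODE_Pos   4u
#define MODE_Msk   (0x0Fu << MODE_Pos)

/* Read the mode field out of a packed register value */
static inline uint8_t getMode(uint8_t reg)
{
    return (uint8_t)((reg & MODE_Msk) >> MODE_Pos);
}

/* Write the mode field, leaving the other bits untouched */
static inline uint8_t setMode(uint8_t reg, uint8_t mode)
{
    return (uint8_t)((reg & (uint8_t)~MODE_Msk) |
                     ((uint8_t)(mode << MODE_Pos) & MODE_Msk));
}
```

On most compilers this generates the same code as the bitfield version, but the layout is now under your control rather than the compiler's.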
The caveat here is that the standard does not specify how the bits have to be packed into the structure, and different systems do in fact pack bits in different orders (e.g. some may pack from the LSB while others pack from the MSB first). This means that you should not rely on specific bits in your struct being in specific locations. All you can rely on is that in 2 structs of the same type the bits will be packed in corresponding locations. When you are dealing with communication systems and sending structures containing bitfields over the wire, you may get a nasty surprise if bits are in a different order on the receiver side. And this also brings us to the next possible inconsistency - packing. This means that, for all the syntactic sugar offered by bitfields, it is still more portable to use shifting and masking. By doing so you can select exactly where each bit will be packed, and on most compilers this will result in the same amount of code as using bitfields.

Padding, Packing and Alignment This is going to be less applicable on a PIC16, but if you write portable code or work with larger processors this becomes very important. Typically padding will happen when you declare a structure that has members which are smaller than the fastest addressable unit of the processor. The standard allows the compiler to place padding, or unused space, in between your structure members to give you the fastest access in exchange for using more RAM. This is called "Alignment". In embedded applications RAM is usually in short supply, so this is an important consideration. You will see, e.g. on a 32-bit processor, that the sizes of structures increment in multiples of 4. The following example shows the definition of some structures and their sizes on a 32-bit processor (my i7 in this case, running macOS). And yes, it is a 64-bit machine, but I am compiling for 32-bit here.
```c
// This struct will likely result in sizeof(struct iAmPadded) == 12
struct iAmPadded {
    char c;
    int i;
    char c2;
};

// This struct results in sizeof(struct iAmReordered) == 8 (on GCC on my
// i7 Mac), or it could be 12 depending on the compiler used.
struct iAmReordered {
    char c;
    char c2;
    int i;
};
```

Many compilers/linkers will have settings with regards to "Packing", which can be set either globally or per structure. Packing instructs the compiler to avoid padding in between the members of a structure where possible, and can save a lot of memory. It is also critical to understand packing and padding if you are making register overlays or constructing packets to be sent over communication ports. If you are using GCC, packing is going to look like this:

```c
// This struct on gcc on a 32-bit machine has sizeof(struct iAmPadded) == 6
struct __attribute__((__packed__)) iAmPadded {
    char c;
    int i;
    char c2;
};

// OR this has the same effect for GCC
#pragma pack(1)
struct iAmPadded {
    char c;
    int i;
    char c2;
};
```

If you are writing code on e.g. an AVR which uses GCC and you want to use the same library on your PIC32 or your Cortex-M0 32-bit part, then you can instruct the compiler to pack your structures like this and save loads of RAM. Note that taking the address of structure members may result in problems on architectures which are not byte-addressable, such as a SPARC. Also, it is not allowed to take the address of a bit-field inside of a structure.

One last note on the use of the sizeof operator. When applied to an operand that has structure or union type, the result is the total number of bytes in such an object, including internal and trailing padding.

Deep and Shallow copy

Another one of those areas where we see countless bugs. Making structures with standard integer and float types does not suffer from this problem, but when you start using pointers in your structures this can turn into a problem real fast.
Generally it is perfectly fine to create copies of structures by passing them into functions or using the assignment operator "=". Example:

```c
struct point {
    int a;
    int b;
};

void function(void)
{
    struct point point1 = {1,2};
    struct point point2;
    point2 = point1; // This will copy all the members of point1 into point2
}
```

Similarly, when we call a function and pass in a struct, a copy of the structure will be made into the parameter stack in the same way. When the structure however contains a pointer we must be careful, because the copy will contain the address stored in the pointer but not the data which the pointer is pointing to. When this happens you end up with 2 structures containing pointers pointing to the same data, which can cause some very strange behavior and hard to track down bugs. Such a copy, where only the pointers are copied, is called a "shallow copy" of the structure. The alternative is to allocate memory for the members being pointed to by the structure and create what is called a "deep copy" of the structure, which is the safe way to do it.

We probably see this with strings more often than with any other type of pointer, e.g.:

```c
struct person {
    char* firstName;
    char* lastName;
};

// Function to read a person's name from the serial port
void getPerson(struct person* p);

void f(void)
{
    struct person myClient = {"John", "Doe"}; // The structure now points to the constant strings

    // Read the person data
    getPerson(&myClient);
}

// The intention of this function is to read 2 strings and assign the names of the struct person
void getPerson(struct person* p)
{
    char first[32];
    char last[32];
    Uart1_Read(first, 32);
    Uart1_Read(last, 32);
    p->firstName = first;
    p->lastName = last;
}
```

The problem with this code is that it is easy for it to look like it works. It will very likely pass most tests you throw at it, but it is tragically broken.
The 2 buffers, first and last, are allocated on the stack, and when the function returns that memory is freed but still contains the data you received. Until another function is called AND that function allocates auto variables on the stack, the memory will remain intact. This means that at some later stage the structure will become invalid and you will not be able to understand how. If you call the function twice, you will later find that both variables you passed in contain the same names, since both end up pointing at the same stack addresses. Always double check and be mindful of where the pointers are pointing and what the lifetime of the allocated memory is. Be particularly careful with memory on the stack, which is always short-lived.

For a deep copy you would have to allocate new memory for the members of the structure that are pointers and copy the data from the source structure to the destination structure manually. Be particularly careful when structures are passed into a function by value, as this makes a copy of the structure which points to the same data, so in this case if you re-allocate the pointers you are updating the copy and not the source structure! For this reason it is best to always pass structures by reference (the function should take a pointer to a structure) and not by value. Besides, if data is worth placing in a structure it is probably going to be bigger than a single pointer, and passing the structure by reference will probably be much more efficient!

Comparing Structs

Although it is possible to assign structs using "=", it is NOT possible to compare structs using "==". The most common solution people go for is to use memcmp with sizeof(struct) to try and do the comparison. This is however not a safe way to compare structures and can lead to really hard to track down bugs!
The problem is that structures can have padding as described above, and when structures are copied or initialized there is no obligation on the compiler to set the values of the locations that are just padding, so it is not safe to use memcmp to compare structures. Even if you use packing, the structure may still have trailing padding after the data to meet alignment requirements. The only time using memcmp is going to be safe is if you used memset or calloc to clear out all of the memory yourself, but always be mindful of this caveat.

Conclusion

Structs are an important part of the C language and a powerful feature, but it is important that you ensure you fully understand all the intricacies involved in structs. There is as always a lot of bad advice and bad code out there in the wild wild west known as the internet, so be careful when you find code in the wild, and just don't use typedef on your structs!

References

As always the Wikipedia page is a good resource
Link to a PDF of the committee draft which was later approved as the C99 standard
Linus Torvalds explains why you should not use typedef everywhere for structs
Good write-up on packing of structures
Bit Fields discussed on Hackaday
  19. Interesting, I remember doing a bunch of products using the 16C74, I think those are also still in use :). Which makes me think, we could easily halve the price of the microcontroller on those designs by changing to a newer model, but it is just not worth the cost and especially not worth the risk ... When selecting a microcontroller, how important is price to you guys? For me I look for the best match to my requirements first, and then if I have more than one solution price is for sure the tie breaker, but I have not once selected a worse match on features to reduce the price (maybe I have just been lucky?). I have to add that the BOM cost of most of these designs was significantly higher than just the microcontroller; at least one of them included a GSM module at $30, which makes the $0.50 for the micro fairly insignificant ... What do you guys think?
  20. Epigrams on Programming

Alan J. Perlis
Yale University

This text has been published in SIGPLAN Notices Vol. 17, No. 9, September 1982, pages 7 - 13.

The phenomena surrounding computers are diverse and yield a surprisingly rich base for launching metaphors at individual and group activities. Conversely, classical human endeavors provide an inexhaustible source of metaphor for those of us who are in labor within computation. Such relationships between society and device are not new, but the incredible growth of the computer's influence (both real and implied) lends this symbiotic dependency a vitality like a gangly youth growing out of his clothes within an endless puberty. The epigrams that follow attempt to capture some of the dimensions of this traffic in imagery that sharpens, focuses, clarifies, enlarges and beclouds our view of this most remarkable of all man's artifacts, the computer.

One man's constant is another man's variable.
Functions delay binding: data structures induce binding. Moral: Structure data late in the programming process.
Syntactic sugar causes cancer of the semi-colons.
Every program is a part of some other program and rarely fits.
If a program manipulates a large amount of data, it does so in a small number of ways.
Symmetry is a complexity reducing concept (co-routines include sub-routines); seek it everywhere.
It is easier to write an incorrect program than understand a correct one.
A programming language is low level when its programs require attention to the irrelevant.
It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures.
Get into a rut early: Do the same processes the same way. Accumulate idioms. Standardize. The only difference (!) between Shakespeare and you was the size of his idiom list - not the size of his vocabulary.
If you have a procedure with 10 parameters, you probably missed some.
Recursion is the root of computation since it trades description for time.
If two people write exactly the same program, each should be put in micro-code and then they certainly won't be the same.
In the long run every program becomes rococo - then rubble.
Everything should be built top-down, except the first time.
Every program has (at least) two purposes: the one for which it was written and another for which it wasn't.
If a listener nods his head when you're explaining your program, wake him up.
A program without a loop and a structured variable isn't worth writing.
A language that doesn't affect the way you think about programming, is not worth knowing.
Wherever there is modularity there is the potential for misunderstanding: Hiding information implies a need to check communication.
Optimization hinders evolution.
A good system can't have a weak command language.
To understand a program you must become both the machine and the program.
Perhaps if we wrote programs from childhood on, as adults we'd be able to read them.
One can only display complex information in the mind. Like seeing, movement or flow or alteration of view is more important than the static picture, no matter how lovely.
There will always be things we wish to say in our programs that in all known languages can only be said poorly.
Once you understand how to write a program get someone else to write it.
Around computers it is difficult to find the correct unit of time to measure progress. Some cathedrals took a century to complete. Can you imagine the grandeur and scope of a program that would take as long?
For systems, the analogue of a face-lift is to add to the control graph an edge that creates a cycle, not just an additional node.
In programming, everything we do is a special case of something more general - and often we know it too quickly.
Simplicity does not precede complexity, but follows it.
Programmers are not to be measured by their ingenuity and their logic but by the completeness of their case analysis.
The 11th commandment was "Thou Shalt Compute" or "Thou Shalt Not Compute" - I forget which.
The string is a stark data structure and everywhere it is passed there is much duplication of process. It is a perfect vehicle for hiding information.
Everyone can be taught to sculpt: Michelangelo would have had to be taught how not to. So it is with the great programmers.
The use of a program to prove the 4-color theorem will not change mathematics - it merely demonstrates that the theorem, a challenge for a century, is probably not important to mathematics.
The most important computer is the one that rages in our skulls and ever seeks that satisfactory external emulator. The standardization of real computers would be a disaster - and so it probably won't happen.
Structured Programming supports the law of the excluded muddle.
Re graphics: A picture is worth 10K words - but only those to describe the picture. Hardly any sets of 10K words can be adequately described with pictures.
There are two ways to write error-free programs; only the third one works.
Some programming languages manage to absorb change, but withstand progress.
You can measure a programmer's perspective by noting his attitude on the continuing vitality of FORTRAN.
In software systems it is often the early bird that makes the worm.
Sometimes I think the only universal in the computing field is the fetch-execute-cycle.
The goal of computation is the emulation of our synthetic abilities, not the understanding of our analytic ones.
Like punning, programming is a play on words.
As Will Rogers would have said, "There is no such thing as a free variable."
The best book on programming for the layman is "Alice in Wonderland"; but that's because it's the best book on anything for the layman.
Giving up on assembly language was the apple in our Garden of Eden: Languages whose use squanders machine cycles are sinful. The LISP machine now permits LISP programmers to abandon bra and fig-leaf.
When we understand knowledge-based systems, it will be as before - except our finger-tips will have been singed.
Bringing computers into the home won't change either one, but may revitalize the corner saloon.
Systems have sub-systems and sub-systems have sub-systems and so on ad infinitum - which is why we're always starting over.
So many good ideas are never heard from again once they embark in a voyage on the semantic gulf.
Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.
A LISP programmer knows the value of everything, but the cost of nothing.
Software is under a constant tension. Being symbolic it is arbitrarily perfectible; but also it is arbitrarily changeable.
It is easier to change the specification to fit the program than vice versa.
Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it.
In English every word can be verbed. Would that it were so in our programming languages.
Dana Scott is the Church of the Lattice-Way Saints.
In programming, as in everything else, to be in error is to be reborn.
In computing, invariants are ephemeral.
When we write programs that "learn", it turns out we do and they don't.
Often it is means that justify ends: Goals advance technique and technique survives even when goal structures crumble.
Make no mistake about it: Computers process numbers - not symbols. We measure our understanding (and control) by the extent to which we can arithmetize an activity.
Making something variable is easy. Controlling duration of constancy is the trick.
Think of all the psychic energy expended in seeking a fundamental distinction between "algorithm" and "program".
If we believe in data structures, we must believe in independent (hence simultaneous) processing. For why else would we collect items within a structure? Why do we tolerate languages that give us the one without the other?
In a 5 year period we get one superb programming language. Only we can't control when the 5 year period will begin.
Over the centuries the Indians developed sign language for communicating phenomena of interest. Programmers from different tribes (FORTRAN, LISP, ALGOL, SNOBOL, etc.) could use one that doesn't require them to carry a blackboard on their ponies.
Documentation is like term insurance: It satisfies because almost no one who subscribes to it depends on its benefits.
An adequate bootstrap is a contradiction in terms.
It is not a language's weaknesses but its strengths that control the gradient of its change: Alas, a language never escapes its embryonic sac.
It is possible that software is not like anything else, that it is meant to be discarded: that the whole point is to always see it as soap bubble?
Because of its vitality, the computing field is always in desperate need of new cliches: Banality soothes our nerves.
It is the user who should parameterize procedures, not their creators.
The cybernetic exchange between man, computer and algorithm is like a game of musical chairs: The frantic search for balance always leaves one of the three standing ill at ease.
If your computer speaks English it was probably made in Japan.
A year spent in artificial intelligence is enough to make one believe in God.
Prolonged contact with the computer turns mathematicians into clerks and vice versa.
In computing, turning the obvious into the useful is a living definition of the word "frustration".
We are on the verge: Today our program proved Fermat's next-to-last theorem!
What is the difference between a Turing machine and the modern computer? It's the same as that between Hillary's ascent of Everest and the establishment of a Hilton hotel on its peak.
Motto for a research laboratory: What we work on today, others will first think of tomorrow.
Though the Chinese should adore APL, it's FORTRAN they put their money on.
We kid ourselves if we think that the ratio of procedure to data in an active data-base system can be made arbitrarily small or even kept small.
We have the mini and the micro computer. In what semantic niche would the pico computer fall?
It is not the computer's fault that Maxwell's equations are not adequate to design the electric motor.
One does not learn computing by using a hand calculator, but one can forget arithmetic.
Computation has made the tree flower.
The computer reminds one of Lon Chaney - it is the machine of a thousand faces.
The computer is the ultimate polluter. Its feces are indistinguishable from the food it produces.
When someone says "I want a programming language in which I need only say what I wish done," give him a lollipop.
Interfaces keep things tidy, but don't accelerate growth: Functions do.
Don't have good ideas if you aren't willing to be responsible for them.
Computers don't introduce order anywhere as much as they expose opportunities.
When a professor insists computer science is X but not Y, have compassion for his graduate students.
In computing, the mean time to failure keeps getting shorter.
In man-machine symbiosis, it is man who must adjust: The machines can't.
We will never run out of things to program as long as there is a single program around.
Dealing with failure is easy: Work hard to improve. Success is also easy to handle: You've solved the wrong problem. Work hard to improve.
One can't proceed from the informal to the formal by formal means.
Purely applicative languages are poorly applicable.
The proof of a system's value is its existence.
You can't communicate complexity, only an awareness of it.
It's difficult to extract sense from strings, but they're the only communication coin we can count on.
The debate rages on: Is PL/I Bactrian or Dromedary?
Whenever two programmers meet to criticize their programs, both are silent.
Think of it! With VLSI we can pack 100 ENIACs in 1 sq.cm.
Editing is a rewording activity.
Why did the Roman Empire collapse? What is the Latin for office automation?
Computer Science is embarrassed by the computer.
The only constructive theory connecting neuroscience and psychology will arise from the study of software.
Within a computer natural language is unnatural.
Most people find the concept of programming obvious, but the doing impossible.
You think you know when you learn, are more sure when you can write, even more when you can teach, but certain when you can program.
It goes against the grain of modern education to teach children to program. What fun is there in making plans, acquiring discipline in organizing thoughts, devoting attention to detail and learning to be self-critical?
If you can imagine a society in which the computer-robot is the only menial, you can imagine anything.
Programming is an unnatural act.
Adapting old programs to fit new machines usually means adapting new machines to behave like old ones.
In seeking the unattainable, simplicity only gets in the way.
If there are epigrams, there must be meta-epigrams.
Epigrams are interfaces across which appreciation and insight flow.
Epigrams parameterize auras.
Epigrams are macros, since they are executed at read time.
Epigrams crystallize incongruities.
Epigrams retrieve deep semantics from a data base that is all procedure.
Epigrams scorn detail and make a point: They are a superb high-level documentation.
Epigrams are more like vitamins than protein.
Epigrams have extremely low entropy.
The last epigram? Neither eat nor drink them, snuff epigrams.
  21. Orunmila

    Melbourne Weather

    Those bikes are actually fascinating. Not quite push-bikes :) They have an electric motor, GPS and GSM connectivity built in. You scan a QR code on the bike to unlock it and you can ride around town with significantly reduced effort, with the motor helping you pedal. I actually took my first ride on one of these just the previous evening when it started snowing, so I am exactly that idiot who does that! I was on my way to the restaurant just 500m down the road and did not bring my watertight jacket, so I figured I could get less wet by being exposed for a shorter time. It cost me a $1 unlock fee and $0.45 for the ride time. I arrived at the restaurant quite dry but entirely out of breath, I totally overdid it, LOL!
  22. Orunmila

    Melbourne Weather

    Weather is getting crazier every year. We had a big snowstorm over here; schools were closed for the last 2 days, causing all kinds of chaos with most people staying at home to take care of their kids. This was the view out my window yesterday morning ...
  23. If you are going to be writing any code you can probably use all the help you can get, and along those lines you had better be aware of the "Ballmer Peak". Legend has it that drinking alcohol impairs your ability to write code, BUT there is a curious peak somewhere in the vicinity of a 0.14 BAC where programmers attain almost super-human programming skill. The XKCD below tries to explain the finer nuances. But seriously, many studies have shown that there is some truth to this, in the sense that an increase in BAC may remove inhibitions which stifle creativity. It is believed the name really came from the Balmer (with one L) series, the sudden peaks in the emission spectrum of Hydrogen discovered in 1885 by Johann Balmer, but since this is about programming that soon got converted to Ballmer after the former CEO of Microsoft, who did tend to behave like he was on such a peak on stage from time to time (see https://www.youtube.com/watch?v=I14b-C67EXY). Either way, we are just going to believe that this one is true - confirmation bias and all, really - just Google it! 🙂
  24. Orunmila

    xc8 gcc clang

    One more thing to note! The Clang implementation comes with a new standard library implementation which is not nearly as optimized as the HiTech version, so expect your code to be quite a bit bigger! I think we saw bloat of at least 10% when we tested this, but of course this depends a whole lot on what you are doing exactly.
  25. Orunmila

    xc8 gcc clang

    XC8 has never used GCC for PICs. XC16 and XC32 are both based on GCC, but XC8 was always based (and still is) on the HiTech compiler which Microchip purchased back in 2009. The HiTech compiler had its own custom front-end and back-end until v2.x. The most significant change when moving to v2 of XC8 is that the front-end of the compiler was replaced by Clang. This was a great improvement as it brought C99 to the compiler in a much more comprehensive way. The back-end is still the HiTech compiler back-end. For backwards compatibility you can still set the compiler to C90 mode, in which case it will run the old front-end for you to give you behavior more consistent with what you had in the past, but I would really recommend that everyone use the shiny new Clang-based front-end. On a different note, the support for AVR in XC8 is based on GCC, so there you should, as you say, find GCC in there.