Everything posted by Orunmila

  1. The driver is more like a framework into which you inject your own functions to change the behavior. The framework is pretty solid but the injected pieces are not complete (as you have discovered). The way to do that is to add a counter to the code we just did: when you start a transaction, set it to X and count down on every retry. Enabling the NAK polling callback is what causes a retry, so you just have to modify that address NACK callback to retry only a limited number of times and that would be it 🙂
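For illustration, here is a minimal sketch of that counter idea against the driver discussed in the posts below (the i2c_operations_t enum values and the callback signature are assumptions based on the driver code quoted further down):

#include <stdint.h>
#include "i2c_master.h"   // assumed to declare i2c_operations_t and i2c_setAddressNACKCallback()

static i2c_operations_t retryWriteLimited(void *ptr)
{
    uint8_t *retriesLeft = (uint8_t *)ptr;   // payload registered together with the callback
    if (*retriesLeft == 0)
    {
        return i2c_stop;                     // give up: the state machine falls through to do_I2C_SEND_STOP()
    }
    (*retriesLeft)--;
    return i2c_restart_write;                // same behavior as i2c_restartWrite, but bounded
}

/* Register it when starting a transaction, e.g. inside i2c_write1ByteRegister():
 *     static uint8_t retries = 5;
 *     i2c_setAddressNACKCallback(retryWriteLimited, &retries);
 */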
  2. Ok, it is never that easy right :) A couple of small oversights up to now:

1. do_I2C_SEND_STOP did not clear the busy flag.
2. The code I posted above has a bug: it does a post-decrement and then checks for 0 afterwards, which does not work.
3. NAK polling is enabled without any timeout; we need to disable this.

After fixing these 3 things it should work as expected. This is how:

1. There is a bug in the driver at do_I2C_SEND_STOP, change that to look like this. It is setting the state to idle here but missing the clearing of the busy flag.

static i2c_fsm_states_t do_I2C_SEND_STOP(void)
{
    i2c1_driver_stop();
    i2c_status.busy = 0;
    return I2C_IDLE;
}

2. There is a bug in the code I posted above: I was doing to--, and then checking for 0, but the post-decrement will have taken to to -1 by the time of the check. It should look like the following, the main difference being the pre-decrement:

inline int mssp1_waitForEvent(uint16_t *timeout)
{
    uint16_t to = 50000; // This will cause a 50ms timeout on I2C
    if(PIR3bits.SSP1IF == 0)
    {
        while(--to)
        {
            if(PIR3bits.SSP1IF) break;
            __delay_us(1); // We reduce this to 1us, the while loop will add some delay so the timeout will be at least 50ms
        }
        if (to == 0)
        {
            return -1;
        }
    }
    return 0;
}

3. At this point the driver will operate in its default mode, which is NAK polling: every time we get a NAK the driver will retry, assuming that the device we are talking to is busy and that we need to retry when a NAK happens. This is activated by setting the addressNACK callback to do a restart-write. We do not want this in your case; we want to fail and return when there is no answer. We can do this by commenting out the line in i2c_simple_master which enables it, as follows:

void i2c_write1ByteRegister(i2c_address_t address, uint8_t reg, uint8_t data)
{
    while(!i2c_open(address));      // sit here until we get the bus..
    i2c_setDataCompleteCallback(wr1RegCompleteHandler,&data);
    i2c_setBuffer(&reg,1);
    //i2c_setAddressNACKCallback(i2c_restartWrite,NULL);  //NACK polling?
    i2c_masterWrite();
    while(I2C_BUSY == i2c_close()); // sit here until finished.
}

Once you have done these three things the driver should do what you need: it will go back to idle after an address NAK with i2c_status.error set to I2C_FAIL.
  3. So there seem to be a couple of bugs here: firstly it is not realizing that the address was NAK'ed, which is pretty standard and should work correctly, and after that it switches the command from write to read, which is also not correct. I would expect it to retry, but it does not make sense to retry using read if the initial attempt was to write to the device... I don't know how soon I will have time to debug this, so here are some tips for you on how the driver works.

The state machine and all decisions happen in i2c_masterOperation, which is in i2c_master.c. That function checks whether you are using the interrupt-driven driver, in which case it just returns and lets the state change happen in the IRQ; otherwise it runs the poller, which polls for the change in state and runs the interrupt code when the state change has happened. Essentially the poller is just polling the interrupt flag SSP1IF and calling the ISR when this is set.

So there are 2 possible ways this could be going wrong. Case 1 is if the IF is never set. You can check this in the debugger, but from what you describe it is not timing out, which means that the IF does seem to be set when the NAK happens. The other option is that the NAK check is disabled or not successfully performed in the ISR. You should check here:

void i2c_ISR(void)
{
    mssp1_clearIRQ();

    // NOTE: We are ignoring the Write Collision flag.
    //       the write collision is when SSPBUF is written prematurely (2x in a row without sending)

    // NACK After address override Exception handler
    if(i2c_status.addressNACKCheck && i2c1_driver_isNACK())
    {
        i2c_status.state = I2C_ADDRESS_NACK; // State Override
    }

    i2c_status.state = fsmStateTable[i2c_status.state]();
}

That if check should evaluate to true. From what I can see, if you are not seeing a timeout you must be seeing this ISR function being called from the poller, and this must set the state to I2C_ADDRESS_NACK. If this does not happen we can investigate whether it is because the check is disabled or because the ACKSTAT register has the wrong value. If it does go in there and the state is set, the next place to look is where the NAK is processed, which should be here:

// TODO: probably need 2 addressNACK's one from read and one from write.
// the do NACK before RESTART or STOP is a special case that a new state simplifies.
static i2c_fsm_states_t do_I2C_DO_ADDRESS_NACK(void)
{
    i2c_status.addressNACKCheck = 0;
    i2c_status.error = I2C_FAIL;
    switch(i2c_status.callbackTable[i2c_addressNACK](i2c_status.callbackPayload[i2c_addressNACK]))
    {
        case i2c_restart_read:
        case i2c_restart_write:
            return do_I2C_SEND_RESTART();
        default:
            return do_I2C_SEND_STOP();
    }
}
  4. Yes that is what I meant when I said "I should actually check why you are not getting one". I would expect that you would send out the address and receive back a NAK when there is nothing connected to the bus. It would be important that you do have the pull-up resistors on the bus though. Looks like I am going to have to crack out that hardware and look at the signals on my Saleae to see what exactly is going on. I left my Saleae at the office so I can only do that tomorrow. Stand by and I will let you know what I find.
  5. Ok this is slightly more complex because the status is one layer up, so you have to pass the fact that there was a timeout up one layer, but that function does not return anything, so you have to change that. 3 easy steps.

Start in i2c_driver.h and change the prototype of the function like this:

INLINE int mssp1_waitForEvent(uint16_t*);

Change the mssp code to this:

inline int mssp1_waitForEvent(uint16_t *timeout)
{
    uint16_t to = 50000; // This will cause a 50ms timeout on I2C
    if(PIR3bits.SSP1IF == 0)
    {
        while(to--)
        {
            if(PIR3bits.SSP1IF) break;
            __delay_us(1); // We reduce this to 1us, the while loop will add some delay so the timeout will be at least 50ms
        }
        if (to == 0)
        {
            return -1;
        }
    }
    return 0;
}

And then lastly catch the error in i2c_master.c:

inline void i2c_poller(void)
{
    while(i2c_status.busy)
    {
        if (mssp1_waitForEvent(NULL) == -1)
        {
            i2c_status.state = I2C_ADDRESS_NACK; // State Override for timeout case
        }
        i2c_ISR();
    }
}

So what this does is pass up -1 when there is a timeout, which will then advance the state machine (which happens in i2c_ISR()) based on the status. Now as I said I just used the Address NAK behavior there; when you have no slave connected you should see an address NAK (I should actually check why you are not getting one), but in this case we are saying a timeout requires the same behavior, which should work ok. If it does not we may want to add a state for timeout as I described before. But let's try the timeout using the I2C_ADDRESS_NACK method first for your board. If this does not work I will crack out a board and run it on the hardware to see exactly what is happening.
  6. It is just the other layer, it has mssp at the bottom, i2c_driver and master and simple on top of that. I just did not pay close enough attention there, I did not check if it would compile, let me check it out tomorrow and I can help you to get that to build.
  7. Last time that happened to me was yesterday!
  8. Ok, I am back up and running and I see that with that you will end up stuck in the next loop. You can force it to break out of the loop by simulating an error. The correct error for your case, and the correct behavior in most cases when you get a timeout, would be to perform the AddressNAK behavior. You can trigger that by doing this:

inline void mssp1_waitForEvent(uint16_t *timeout)
{
    uint16_t to = 50000; // This will cause a 50ms timeout on I2C
    if(PIR3bits.SSP1IF == 0)
    {
        while(to--)
        {
            if(PIR3bits.SSP1IF) break;
            __delay_us(1); // We reduce this to 1us, the while loop will add some delay so the timeout will be at least 50ms
        }
        if (to == 0)
        {
            i2c_status.state = I2C_ADDRESS_NACK; // State Override for timeout case
        }
    }
}

If you want different behavior in the case of a timeout than you have for an address NAK, you can always add an entry to the state table (a new stateHandlerFunction), set the state to that new entry at this location, and then copy the do_I2C_DO_ADDRESS_NACK() function and change its behavior to whatever you want to do differently for a timeout. You may e.g. set the error to I2C_TIMEOUT, which you could add to i2c_error_t, and you probably do not want to do the do_I2C_SEND_RESTART. All of that is of course a more substantial change, but I could walk you through it if you want.
  9. You can use statically linked memory (like a global array of items) to allocate, or you can allocate on the stack by creating the variable in the function where it is being used. In the case where you statically allocate it, running out of memory will cause the code not to compile, which means of course that when it does compile you are guaranteed that you can never fail in this way.

If you allocate from any pool (or heap) you will always have to be excessively careful, as things like fragmentation can easily catch you. FreeRTOS e.g. has a safe heap implementation which has malloc but no free; that allows you to call malloc during init to allocate all the memory you need, and gives you a deterministic failure if you do not have enough memory, which is easy to debug. If you have a free then making it safe is substantially more difficult, because you will fail at the point of maximum memory pressure, and this will likely be a race condition which may have a low probability of happening during your testing.

The best course of action is to design the system in such a way that it does not compile when you do not have sufficient resources; that way there is no guessing game. Proper error checking only gets you so far. I have seen a case where the init function of the UART (which was the only user interface) failed to allocate memory (the linker settings were wrong, of course), but the point is that error checking would not have helped much in that case. I have also seen similar cases where there is no way to recover, especially in bootloaders.
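As a minimal sketch of the statically allocated approach (all names here are hypothetical), a fixed pool makes the capacity a build-time property and keeps any allocation failure deterministic:

#include <stdint.h>
#include <stddef.h>

#define MAX_ITEMS 16u

typedef struct
{
    uint8_t  id;
    uint16_t value;
} item_t;

static item_t itemPool[MAX_ITEMS];   /* statically linked: if this does not fit in RAM, the build fails */
static size_t itemCount = 0;

item_t *item_alloc(void)
{
    if (itemCount >= MAX_ITEMS)
    {
        return NULL;                 /* deterministic failure, easy to hit and debug during init */
    }
    return &itemPool[itemCount++];   /* note: no free(), similar in spirit to the FreeRTOS heap_1 scheme */
}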
  10. That delay is central to solving your problem. It is really part of some unfinished timeout code at the top of that function, which you thankfully included in your question! The idea was that you can use the concept shown in the comments to produce a timeout in the driver. If you do not expect any timeouts you can safely remove the delay; it simply makes the timeout a multiple of the delay number, so if we delay 100us and we set timeout to 100 (the default) then we will get a 10ms timeout.

What I would suggest is that you complete the timeout implementation as follows. First you need to set the timeout variable to a fixed value, and then we will reduce the unit delay to 1us (you could remove it altogether, in which case the timeout will be 50,000 loop cycles). If this is too short you can always increase the variable to 32-bit and increase the count, or add a couple of NOPs to the function. It should look like this when you are done:

inline void mssp1_waitForEvent(uint16_t *timeout)
{
    uint16_t to = 50000; // This will cause a 50ms timeout on I2C
    if(PIR3bits.SSP1IF == 0)
    {
        while(to--)
        {
            if(PIR3bits.SSP1IF) break;
            __delay_us(1); // We reduce this to 1us, the while loop will add some delay so the timeout will be at least 50ms
        }
    }
}

MPLAB-X has died on me today; I am re-installing and will check whether this is sufficient to fix your problem as soon as I have it back up and running. In the meantime you could try the above.
  11. All too often I see programmers stumped trying to lay out the folder structure for their embedded C project. My best advice is that folder structure is not the real problem. It is just one symptom of dependency problems. If we fix the underlying dependencies, a pragmatic folder structure for your project will probably be obvious because the design is sound. In this blog I am going to first look briefly at modularity in general, and then explore some program folder structures I see often, exploring if and why they smell.

On Modularity in general

Writing modular code is not nearly as easy as it sounds. Trying it out for real we quickly discover that simply distributing bits of code across a number of files does not solve much of our problems. This is because modularity is about Architecture and Design and, as such, there is a lot more to it. To determine if we did a good job we need to first look at WHY. WHY exactly do we desire the code to be modular, or to be more specific - what exactly are we trying to achieve by making the code modular?

A lot can be said about modularity, but my goals are usually as follows:

• Reduce working set complexity through divide and conquer.
• Avoid duplication by re-using code in multiple projects (mobility).
• Adam Smith-like division of labor. When code is broken down into team-sized modules we can construct and maintain it more efficiently. Teams can have areas of specialization and everyone does not have to understand the entire problem in order to contribute.

In engineering, functional decomposition is a process of breaking a complex system into a number of smaller subsystems with clearly distinguishable functions (responsibilities). The purpose of doing this is to apply the divide and conquer strategy to a complex problem. This is also often called Separation of Concerns. If we keep that in mind we can test for modularity during code review by using a couple of simple core concepts:

• Separation: Is the boundary of every module clearly distinguishable? This requires every module to be in a single file, or else - if it spans multiple files - a single folder which encapsulates the contents of the module into a single entity.
• Independent and Interchangeable: This implies that we can also use the module in another program with ease, something Robert C Martin calls Mobility. A good test is to imagine how you would manage the code using version control systems if the module you are evaluating had to reside in a different repository, have its own version number and its own independent documentation.
• Individually testable: If a module is truly independent it can be used by itself in a test program without bringing a string of other modules along. Testing of the module should follow the Open-Closed principle, which means that we can create our tests without modifying the module itself in any way.
• Reduction in working set complexity: If the division is not making the code easier to understand it is not effective. This means that modules should perform abstraction - hiding as much of the complexity inside the module and exposing a simplified interface one layer of abstraction above the module function.

Software Architecture is in the end all about Abstraction and Encapsulation, which means that making your code modular is all about Architecture. By dividing your project into a number of smaller, more manageable problems, you can solve each of these individually.
We should be able to give each of these to a different autonomous team that has its own release schedule, its own code repository and its own version number.

Exploring some program file structures

Now that we have established some ground rules for testing for modularity, let's look at some examples and see if we can figure out which ones are no good and which ones can work, based on what we discussed above.

Example 1: The Monolith

I would hope that we can all agree that this fails the modularity test on all counts. If you have a single file like this there really is only one way to re-use any of the code, and that is to copy and paste it into your other project. For a couple of lines this could still work, but normally we want to avoid duplicating code in multiple projects, as this means we have to maintain it in multiple places, and if a bug was found in one copy there would be no way to tell how many times the code has been copied and where else we would have to go fix things. I think what contributes to the problem here is that little example projects or demo projects (think of that hello world application) often use this minimalistic structure in the interest of simplifying things down to the bare minimum. This makes sense if we want to really focus on a very specific concept as an example, but it sets a very poor example of how real projects should be structured.

Example 2: The includible main

In this project, main.c grew to the point where the decision was made to split it into multiple files, but the code was never redesigned, so the modules still have dependencies back to main. That is usually when we see questions like this on Stack Overflow. Of course main.c cannot call into module.c without including module.h, and the module is really the only candidate for including main.h, which means that you have what we call a circular dependency. This mutual dependency indicates that we do not actually have 2 modules at all. Instead, we have one module which has been packaged into 2 different files.

Your program should depend on the modules it uses; it does not make sense for any of these modules to have a reverse dependency back to your program, and as such it does not make any sense to have something like main.h. Instead, just place anything you are tempted to place in main.h at the top of main.c! If you do have definitions or types that you think can be used by more than one module, then make this into a proper module, give it a proper name, and let anything which uses it include this module as a proper dependency.

Always remember that header files are the public interfaces into your C translation unit. Any good Object Oriented programming book will advise you to make as little as possible public in your class. You should never expose the insides of your module publicly if it does not form part of the public interface for the class. If your definitions, types or declarations are intended for internal use only, they should not be in your public header file; placing them at the top of your C file is most likely the best.

A good example is device configuration bits. I like to place my configuration bit definitions in a file by itself called device_config.h, which contains only configuration bit settings for my project. This module is only used by main, but it is not called main.h. Instead, it has a single responsibility which is easy to deduce from the name of the file. To keep it single-responsibility I will never put other things like global defines or types in this file.
It is only for setting up the processor config bits, and if I do another project where the settings should be the same (e.g. the bootloader for the project) then it is easy for me to re-use this single file.

In a typical project, you will want to have an application that depends on a number of libraries, something like this. Importantly, we can describe the program as an application that uses WiFi, TempSensors, and TLS. There should not be any direct dependencies between modules. Any dependencies between modules should be classified as configuration which is injected by the application, and the code that ties all of this together should be part of the application, not the modules. It is important that we adhere to the Open-Closed principle here. We cannot inject dependencies by modifying the code in the libraries/modules that we use; it has to be done by changing the application. The moment we change the libraries to do this we have changed the library in an application-specific way, and we will pay the price for that when we try to re-use it. It is always critical that the dependencies here run only in one direction, and that you can find all the code that makes up each module on your diagram in a single file, or in a folder by itself, to enable you to deal with the module as a whole.

Example 3: The Aggregate or generic header file

Projects often use an aggregate header file called something like "includes.h". This quickly leads to the pattern where every module depends on every other, also known as Spaghetti Code. It becomes obvious if you look at the include graph, or when you try to re-use a module from your project by itself, for e.g. a test. When any header file is changed you now have to re-test every module. This fails the test of having clearly distinguishable boundaries and clear and obvious dependencies between modules.

In MCC there is a good (or should I say bad?) example of such an aggregate header file called mcc.h. I created a minimal project using MCC for the PIC16F18877 and only added the Accel3 click to the project as a working example for this case. The include graph generated using Doxygen looks as follows. There is no indication from this graph that the Accelerometer is the one using the I2C driver, and although main never calls I2C itself it does look like that dependency exists. The noble intention here is of course to define a single external interface for MCC-generated code, but it ends up tying all of the MCC code together into a single monolithic thing. This means my application no longer depends on just the Accelerometer; it now depends on a single monolithic thing called "everything inside of MCC", and as MCC grows this will become more and more painful to manage.

If you remove the aggregate header then main no longer includes everything and the kitchen sink, and the include graph reduces to something much more useful as follows. This works better because now the abstractions are being used to simplify things effectively, and the dependency of the sensor on I2C is hidden from the application level. This means we could change the sensor from I2C to SPI without having any impact on the next layer up.

Another version of this anti-pattern is called "One big Header File", where instead of making one header that includes all the others, we just place all the contents of all those headers into a single global file. This file is often called "common.h" or "defs.h" or "global.h" or something similar.
Ward Cunningham has a good, comprehensive list of the problems caused by this practice on his wiki.

Example 4: The shared include folder

This is a great example of cargo-culting something that sometimes works in a library and applying it everywhere without understanding the consequences. The mistake here is to divide the project into sources and headers instead of dividing it into modules. Sources and headers are hopefully not the division that comes to mind when I ask you to divide code into modules! In the context of a library, where the intention is very much to have an external interface (include) separated from its internal implementation (src), this segregation can make sense, but your program is not a library. When you look at this structure you should ask how it would work in the following typical scenarios:

• What if one of the libraries grows enough that we need to split it into multiple files? How will you then know which headers and sources belong to which library?
• What if two libraries end up with identically named files? Typical examples of collisions are types.h, config.h, hal.h, callbacks.h or interface.h.
• If I have to update a library to a later version, how will I know which files to replace if they are all mixed into the same folder?
• How do I know which files are part of my project, and as such should be maintained locally, as opposed to which files are part of a library and should be maintained at the library project location which is used in many projects?

This structure is bad because it breaks the core architectural principles of cohesion and encapsulation, which dictate that we keep related things together and encapsulate logical or functional groupings into clearly identifiable entities. If you do not get this right it leads to library files being copied into every project, and that means multiple copies of the same file in revision control. You also end up with files that have nothing to do with each other grouped together in the same folder.

Example 5: A better way

On the other hand, if you focus on cohesion and encapsulation you should end up with something more like this. I am not saying this is the one true way to structure your project, but with this arrangement we can get the libraries from revision control and simply replace an entire folder when we do. It is also obvious which files are part of each library, and which ones belong to my project. We can see at a glance that this project has its own code and depends on 3 libraries. The structure embodies information about the project which helps us manage it, and this information is not duplicated, so we are not required to keep data from different places in sync. We can now include these libraries into this, or any other project, by simply telling Git to fetch the desired version of each of these folders from its own repository. This makes it easy to update the version of any particular library, and name collisions between libraries are no longer an issue. Additionally, as the library grows it will be easy to distinguish in my code which library I have a dependency on, and exactly which types.h file I am referring to, when I refer to the header files as shown below.
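As a hypothetical illustration of that last point (the folder and file names are made up), keeping each library in its own folder lets the include line itself say which library, and which types.h, you depend on:

#include "wifi/types.h"          /* clearly the WiFi library's types */
#include "tempsensors/types.h"   /* clearly the temperature sensor library's types */
#include "tls/types.h"           /* no collision, no guessing */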
Conclusion

Many different project directory structures could work for your project. We are in no way saying that this is "the one true structure". What we are saying is that when the time comes to commit your project to a structure, do remember the pros and cons of each of the examples we discussed. That way you will at least know the consequences of your decisions before you are committed to them.

Robert C. Martin, aka Uncle Bob, wrote a great article back in 2000 describing the SOLID architectural principles. SOLID is focused on managing dependencies between software modules. Following these principles will help create an architecture that manages the dependencies between modules well. A SOLID design will naturally translate into a manageable folder structure for your embedded C project.
  12. Some advice for Microchip: if this was my product I would stop selling development kits with A1 or A3 silicon to customers. I2C is widely used and it will create a really bad impression of the product's reliability if customers evaluate it on defective silicon. And please fix the Errata: your workaround for I2C Issue 1 does not work as advertised!
  13. Ok, some feedback on this one. The workaround in the Errata, it turns out, does not work. The Errata claims you can recover by clearing the error and waiting for the bus to return to idle, but this does not work at all: we clear BCL and wait for both the S and P bits to be 0, but that never happens and we end up waiting forever.

As an attempt to work around this we decided to try resetting the entire module. That means we set the ON bit in I2CxCON to 0 to disable the module, which resets all the status bits and resets the I2C state machine. Once this is done we wait 4 clock cycles (since the second workaround in the Errata suggests we should wait for 4 clock cycles) and then we set the ON bit back to 1. This clears the BCL error condition correctly and allows us to continue using the peripheral.

We have not yet tried to implement the workaround with the timeout that resets the I2C peripheral if it becomes unresponsive without warning; that will be coming up next, but it does seem like that will work fine, as it will also disable the entire module when the condition happens, which seems to clean out the hardware state machine that looks like the culprit here.

The I2C peripheral section (Section 24) of the family datasheet can be found here: http://ww1.microchip.com/downloads/en/devicedoc/61116f.pdf
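For reference, here is a sketch of that module-reset sequence. The register and mask names follow the usual PIC32 device header conventions but should be checked against your part, and the length of the wait is an assumption you must size to at least four I2C clock periods at your bus speed:

#include <xc.h>
#include <stdint.h>

void I2C1_RecoverFromBusCollision(void)
{
    I2C1STATCLR = _I2C1STAT_BCL_MASK;   /* clear the BCL error status */
    I2C1CONCLR  = _I2C1CON_ON_MASK;     /* ON = 0: disable the module, resetting its internal state machine */

    for (volatile uint32_t i = 0; i < 1000; i++)
    {
        /* crude delay - make sure this spans at least 4 I2C clock cycles at your bus speed */
    }

    I2C1CONSET  = _I2C1CON_ON_MASK;     /* ON = 1: re-enable the module and carry on */
}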
  14. I am struggling to figure out how to work around what seems to be a silicon bug in the PIC32MZ2048EFM on A1 silicon. I am using the development kit DM320104. From MPLABX I can see that the board I have is running A1 revision silicon. Looking at the Errata for the device I found that there is a silicon Errata on the I2C peripheral and I am hitting at least 2 of the described problems:

• False Error Condition 1: False Master Bus Collision Detect (Master-mode only) – The error is indicated through the BCL bit (I2CxSTAT).
• False Error Condition 3: Suspended I2C Module Operations (Master or Slave modes) – I2C transactions in progress are inadvertently suspended without error indications.

In both cases the Harmony I2C driver ends up in a loop never returning again. For condition 1 the ISR keeps triggering and I2C stops working and for condition 3 the driver just gets stuck. I have tried to implement the workarounds listed in that Errata but I seem to have no luck. The Errata does not have an example, only a text description, so I was hoping someone on here has tried this and can help me figure out what I am doing wrong. Currently for condition 1, from the bus collision ISR we are clearing the ISR flag and the BCL bit and then setting the start bit in the I2C1STAT register, but the interrupt keeps on firing away and no start condition is happening. Any idea what we are doing wrong?
  15. Absolutely, and nice examples! Hungarian notation breaks the abstraction of having a variable name with unspecified underlying storage, so I think it is the worst way to leak implementation details!
  16. I think specifically we need to know what processor you are trying to use as this differs from device to device. The simplest and most generic answer would be to add the UART to your project and click on the checkbox to enable interrupts for the driver. After generating code you will have to set the callback which you want called when the interrupt occurs. After this you need to make sure you are enabling interrupts in your main code and it should work. If you supply us with the details above I will post some screenshots for you on how to do this. Just to show you the idea I picked the 16F18875 and added the EUSART as follows: You can see I clicked next to "Enable EUSART Interrupts" Then in my main I ensured the interrupts are enabled. When I now run the code the ISR created by MCC is executed every time a byte is received. The ISR function is called EUSART_Receive_ISR and it is located in the eusart.c file. You can edit this function or replace it by setting a different function as ISR by calling EUSART_SetRxInterruptHandler if you want to change the behavior.
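To make that concrete, here is a sketch of the main-side code. The function names are the ones MCC typically generates for the PIC16F18875 EUSART; treat them as assumptions and check your generated eusart.h:

#include "mcc_generated_files/mcc.h"

volatile uint8_t lastRxByte;

/* our own handler, called from the MCC-generated EUSART receive ISR */
static void myRxHandler(void)
{
    lastRxByte = EUSART_Read();   /* fetch the byte that caused the interrupt */
}

void main(void)
{
    SYSTEM_Initialize();

    EUSART_SetRxInterruptHandler(myRxHandler);   /* replace the default EUSART_Receive_ISR behavior */

    INTERRUPT_GlobalInterruptEnable();           /* make sure interrupts are enabled in main */
    INTERRUPT_PeripheralInterruptEnable();

    while (1)
    {
        /* application code - lastRxByte is updated from the ISR */
    }
}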
  17. Yes Doxygen is great for that, it also allows you to click on the boxes and drill down into the details. I use it for this all of the time!
  18. With C it can be very tricky. The linker will resolve the symbols at link time, and up until then you cannot trace the dependencies in any easy way. You can try something that does static code analysis, but if you are using #defines it can be unreliable if you do not get all of the settings correct, especially if some of your #ifdefs depend on things the compiler defines for you. The best thing you can do is fully explore which files are being used, even if that means removing them one at a time and testing it all out. Always include only the files that you are really using; if you have dead code in your project it just makes it harder to understand, and that kind of rot just accumulates over time.
  19. Comments

I was musing over a piece of code this week trying to figure out why it was doing something that seemed to make no sense at first glance. The comments in this part of the code were of absolutely no help; they were simply describing what the code was doing. Something like this:

// Add 5 to i
i += 5;

// Send the packet
sendPacket(&packet);

// Wait on the semaphore
sem_wait(&sem);

// Increment thread count
Threadcount++;

These comments just added to the noise in the file, made the code not fit on one page and harder to read, and did not tell me anything that the code was not already telling me. What was missing was what I was grappling with: why was it done this way, why not any other way? I asked a colleague and to my frustration his answer was that he remembered that there was some discussion about this part of the code and that it was done this way for a very good reason! My first response was of course "well why is that not in the comments!?"

I remember having conversations about comments being a code smell many times in the past. There is an excellent talk by Kevlin Henney about this on YouTube. Just like all other code smells, comments are not universally bad, but whenever I see a comment in a piece of code my spider sense starts tingling and I immediately look a bit deeper to try to understand why comments were actually needed here. Is there not a more elegant way to do this which would not require comments to explain, where reading the code would make what it is doing obvious?

WHAT vs. WHY Comments

We all agree that good code is code which is properly documented, referring to the right amount of comments, but there is a terrible trap here that programmers seem to fall into all of the time. Instead of documenting WHY they are doing things a particular way, they instead put in the documentation WHAT the code is doing. As Henney explains, English, or whatever written language for that matter, is not nearly as precise as the programming language itself. The code is the best way to describe what the code is doing, and we hope that someone trying to maintain the code is proficient in the language it is written in, so why all of the WHAT comments?

I quite like this Codemanship video, which shows how comments can be a code smell, and how we can use the comments to refactor our code to be more self-explanatory. The key insight here is that if you have to add a comment to a line or a couple of lines of code, you can probably refactor the code into a function which has the comment as the name. If a line which only calls a function still needs a comment, that means the function is probably not named well enough to be obvious; consider taking the comment and using it as the name of the function instead.

This blog has a number of great examples of how NOT to comment your code, and comical as the examples are, the scary part is how often I actually see these kinds of comments in production code! It has a good example of a "WHY" comment as follows:

/* don't use the global isFinite() because it returns true for null values */
Number.isFinite(value)

So what are we to do, how do we know if comments are good or bad? I would suggest the golden rule must be to test your comment by asking whether it is explaining WHY the code is done this way, or whether it is stating WHAT the code is doing. If you are stating WHAT the code is doing then consider why you think the comment is necessary in the first place.
First, consider deleting the comment altogether; the code is already explaining what is being done after all. Next, try to rename things or refactor it into a well-named method, or fix the problem in some other way.

If the comment is adding context, explaining WHY it was done this way, what else was considered, and what the trade-offs were that led to it being done this way, then it is probably a good comment. Quite often we try more than one approach when designing and implementing a piece of code, weighing various metrics/properties of the code to finally settle on the preferred solution. The biggest mistake we make is not to capture any of this in the documentation of the code. This leads to newcomers re-doing all your analysis work, often re-writing the code before realizing something you learned when you wrote it the first time. When you comment your code you should be capturing that kind of context. You should be documenting what was going on in your head when you were writing the code. Nobody should ever read a piece of your code and ask out loud "what were they thinking when they did this?". What you were thinking should be there in plain sight, documented in the comments.

Conclusion

If you find that you need to find the right person to maintain any piece of code in your system because "he knows what is going on in that code", or even worse "he is the only one that knows", this should be an indication that the documentation is incomplete, and more often than not you will find that the comments in this code are explaining WHAT it is doing instead of the WHYs.

When you comment your code, avoid at all costs explaining WHAT the code is doing. Always test your comments against the golden rule of comments, and if a comment is explaining what is happening then delete it! Only keep the WHY comments and make sure they are complete. And make especially sure that you document the things you considered and concluded would be the wrong thing to do in this piece of code, and WHY that is the case.
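As a small illustration of that rule (the names here are invented for the example), a WHAT comment can usually be folded into a function name, leaving only the WHY behind:

#include <semaphore.h>

static sem_t bufferLock;    /* guards the shared packet buffer */

/* Before:
 *     // Wait on the semaphore
 *     sem_wait(&bufferLock);
 * The comment repeats the code and tells the reader nothing new. */

/* After: the WHAT lives in the name, and the comment that remains answers WHY. */
static void waitForExclusivePacketBufferAccess(void)
{
    /* WHY: the DMA engine also writes this buffer, so unsynchronized access
       corrupts packets; every access must hold the lock. */
    sem_wait(&bufferLock);
}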
  20. Because in C89 this would be a syntax error. The syntax did not exist until it was introduced in C99 together with designated initializers. In C89 it was not possible to initialize a union by its second member, because it was not possible to name the target member. This is important because many compilers today are still not fully C99 compliant and support only some of its constructs, which means that if you use designated initializers your code may be less portable, because some compilers may still choke on that syntax.

This example is verbatim from the C99 standard, section 6.7.7 paragraph 6. The answer to your question is right there in the last sentence: "The first two bit-field declarations differ in that unsigned is a type specifier (which forces t to be the name of a structure member), while const is a type qualifier (which modifies t which is still visible as a typedef name)." So in other words, because of the "unsigned", the t is forced to be the name of the member and it is NOT the type of the member as you may expect. This means that when used like that the member is indeed not unnamed; it is named t, of type unsigned, and the typedef from above is not applicable at all. I know, that is why even in the standard they refer to this as "obscure"!

I have no idea; navigation keys and Enter work just fine for me. I am using Google Chrome, perhaps it is the browser or a setting. Which browser are you using?
  21. This happens from time to time, I have also had periods where nobody could post anything. If that happens please do report it here, it may just help some poor soul who is desperately looking for help 🙂
  22. Something that comes up all the time: PWM resolution. Engineers are often disappointed when they find out that the achievable resolution of the PWM is not nearly what they expected from the headline claims made in the microcontroller datasheet. Here is just one example of a typically perplexed customer. Most of the time this is not due to dishonest advertising, but rather an easily overlooked property of how PWMs work, so let's clear that up so that we can avoid the disappointment.

A conventional PWM will let you set the period and the duty cycle, something like the image to the right shows. In the picture Tsys represents the clock of the PWM, and Tpwm shows the period of the PWM. In the example the period (Tpwm) of the PWM is 4x the system clock. Additionally you can set the duty cycle register, which lets you choose for how many of the Tsys clocks that fit into one Tpwm the output should remain high. In the example the duty cycle is set to 3.

It is typical for a microcontroller datasheet to advertise that it can accommodate a duty cycle of 12 bits or 16 bits or something in that order. Of course this is an achievable number of clocks for the PWM to remain high, but the range of the number is always going to be limited by the period of the PWM. That means that if we select the period to be 2^16 = 65536 clocks, then we will also be able to control the duty cycle up to 16 bits. It is easy to make the mistake of believing that you will get 16 bits of resolution over the achievable range, but this is very rarely the case.

Let's look at some real numbers as an example using the PIC16F1778. The first page from the datasheet can be seen to the right. It advertises here that the PWM on this device is 16-bit. Importantly it also shows that the timer capability is limited to 16-bit. Looking at the PWMs on this device we will try to see what is the highest frequency (lowest period) at which we can get 16 bits of PWM resolution. The fastest clock this PWM can use as its time base is the system clock, which is limited to 32MHz on this device. That means, in terms of Figure 1 above, that Tsys would be the period of one clock at 32MHz = 31.25ns. If we want to achieve the full resolution of the PWM we have to run the timer at its 16-bit limit, which means that the PWM frequency will be 32MHz / 2^16 = 488Hz! So if you need the PWM frequency to be anything more than that you will have to compromise on the resolution in order to achieve a faster switching frequency.

Typically engineers will try to run at switching frequencies above 20kHz because this is roughly the upper limit of the audible range of the human ear. If you switch at a lower frequency people will hear a hum or a high-pitched tone which can be very irritating. So let's say we compromise to the lowest limit here and try to run the PWM at 20kHz; how much of the PWM resolution will we be giving up by using a higher frequency? The easiest way to calculate this is to simply realize that one clock is 31.25ns, and the resolution of the PWM will be limited to how many times 31.25ns fits into the period of the PWM. At 488Hz the period is 1/488Hz ≈ 2ms, and we can calculate that 2ms/31.25ns ≈ 65536. We can determine how many bits are required to represent that by taking log(65536)/log(2) = 16 bits of resolution. This would mean that to get the number of usable steps for the duty cycle at 20kHz we need to calculate (1/20kHz)/31.25ns = 1600.
So with 1600 divisions the resolution of the PWM is reduced to log(1600)/log(2) = 10.64 bits, which means that we achieve slightly better than 10 bits of resolution. This is the point where people are usually unhappy that the advertised 16 bits of resolution has somehow evaporated and turned into only 10 bits!

So the advice I have for you is this: when selecting a device where you have PWM resolution requirements, you had better make sure you do all the math to confirm that you can run at the resolution you need with the clocks you have available. And remember that when it comes to PWM resolution the PWM clock speed is always going to be king, so it is typically better to select the device with the higher clock speed instead of the one that claims the highest PWM resolution (at a snail's pace...). And if you feel adventurous you can always try something more exotic, like using the NCO to generate a high-resolution PWM at high frequencies as described in this application note.
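If you want to sanity-check a part before committing to it, the arithmetic above is easy to script. Here is a small sketch (plain C, nothing device-specific) that reproduces the 20kHz example:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double f_clk = 32e6;    /* fastest PWM clock: 32MHz system clock, 31.25ns per tick */
    double f_pwm = 20e3;    /* desired switching frequency */

    double counts = f_clk / f_pwm;          /* clock ticks per PWM period = available duty cycle steps */
    double bits   = log(counts) / log(2.0); /* usable resolution in bits */

    printf("%.0f steps -> %.2f bits of resolution\n", counts, bits);
    return 0;
}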
  23. I2C is such a widely used standard, yet it has caused me endless pain and suffering. In principle I2C is supposed to be simple and robust, a mechanism for "Inter-Integrated Circuit" communication. I am hoping that this summary of the battle scars I have picked up from using I2C might just save you some time and suffering. I have found that despite a couple of typical initial hiccups it is generally not that hard to get I2C communication going, but making it robust and reliable can prove to be quite a challenge.

Problem #1 - Address Specification

I2C data is not represented as a bit-stream, but rather as a specific packet format with framing (start and stop conditions) preceded by an address, which encapsulates a sequence of 8-bit bytes, each followed by an ACK or NAK bit. The first byte is supposed to be the address, but right off the bat you have to deal with the first special case. How to combine this 7-bit address with the R/W bit always causes confusion. There is no consistency in the datasheets of I2C slave devices for specifying the device address, and even worse, most vendors fail to specify which approach they use, leaving users to figure it out through trial and error. This has become bad enough that I would not recommend trying to implement I2C without an oscilloscope in hand to resolve these kinds of guessing games.

Let's say the 7-bit device address was 0x76 (like the ever-popular Bosch Sensortec BME280). Sometimes this will be specified simply as 0x76, but the API in the software library, in order to save the work of shifting this value by 1 and masking in the R/W bit, will often require you to pass in 0xEC as the address (0x76 left-shifted by one). Sometimes the vendor will specify 0xEC as the "write" address and 0xED as the "read" address. To add insult to injury, your bus analyzer or Saleae will typically show the first 8 bits as a hex value, so you will never see the actual 7-bit address as a hex number on the screen, leaving you bit-twiddling in your head on a constant basis while trying to make sense of the traces.

Problem #2 - Multiple Addresses

To add to the confusion from above, many devices (like the BME280) have the ability to present on more than one address, so the datasheet will specify that (in the case of the BME280) if you pull down the unused SDO pin on the device its address will be 0x76, but if you pull the pin up it will be 0x77. I have seen many users leave this "unused" pin floating in their layouts, causing the device to schizophrenically switch between the 2 addresses at runtime and the behavior to look erratic. This also, of course, doubles the number of possible addresses the device may end up responding to, and the specification of exactly 2 addresses fools a lot of people into thinking that the vendor is actually specifying a read and a write address as described above. This all adds to the guessing game of what the actual device address may be.

To add to the confusion, most devices have internal registers and these also have their own addresses, so it is very easy to get confused about what should go in the address byte. It is not the register address, it is the slave address; the register address goes in the data byte of the "write" you need to use if you want to do a "read", in order to read a register from a specific address on the slave. Ok, if that is not confusing to you I salute you sir!
Problem #3 - 10-bit address mode

As if there was not enough address confusion already, the limitation of only 127 possible device addresses led to the inclusion of an extension called 10-bit addressing. A 10-bit address is actually a pre-defined 5 bits, followed by the 2 most significant bits of the 10-bit address, then the R/W bit, after which comes an ack from all the devices on the bus using 10-bit addressing with the same 2 address MSBs, and after this the remaining 8 bits of the address followed by the real/full address ack. So once again there is no standard way to represent the 10-bit address. Let's say the device has 10-bit address 0x123, how would this be specified now? The vendor could say 0x123 (and only 10 of the 12 bits implied are the 10-bit address), or they could include the prefix and specify it as 0xF223. Of course that number contains the R/W bit in the middle somewhere, so they may specify a "read" and a "write" address as 0xF223 and 0xF323, or they could right-shift the high byte to show it as a normal 7-bit address, removing the R/W bit, and say it is 0x7923. I think you get the picture here: lots of room for confusion, and we have not even received our first ACK yet!

Problem #4 - Resetting during Debugging

Since I2C is essentially transaction/packet based and it does not include timeouts in the specification (SMBUS does of course, but most slave sensors conform to I2C only), there is a real chance that you are going to reset your host processor (or bus master) in the middle of such a transaction. This happens as easily as re-programming the processor during development (which you will likely be doing a lot). The problem that tends to catch everybody at some point is that a hardware reset of your host processor is entirely invisible to the slave device, which does not lose power when you toggle the master device's reset pin! The result is that the slave thinks that it is in the middle of an I2C transaction and awaits the expected number of master clock pulses to complete the current transaction, but the master thinks that it should be creating a start condition on the bus. This often leads to the slave holding the data line low and the master unable to generate a start condition on the bus.

When this happens you will lose the ability to communicate with the I2C sensor/slave and start debugging your code to find out what has broken. In reality, there is nothing wrong with your code, and simply removing and re-applying the power to the entire board will cause both the master and the slave to be reset, leaving you able to communicate again. Of course, re-applying the power typically causes the device to start running, and if you want to debug you will have to attach the debugger, which may very well leave you in a locked-up state once again. The only way around this is to use your oscilloscope or Saleae all of the time, and whenever the behavior seems strange stare very carefully at what is happening with the data line: is the address going out, is the start condition recognized and is the slave responding as it should? If not you are stuck and need to reset the slave device somehow.

Problem #5 - Stuck I2C bus

The situation described in #4 above is often referred to as a "stuck bus" condition. I have tried various strategies in the past to robustly recover from such a stuck bus condition programmatically, but they all come with a number of compromises.
Firstly, slave devices are essentially allowed to clock-stretch indefinitely, and if a slave device's state machine goes bonkers it is possible that a single slave device can hold the entire bus hostage indefinitely, and the only thing you can possibly do is remove the power from all slave devices. This is not a very common failure mode, but it is definitely possible and needs addressing for robust or critical systems.

Often getting the bus "unstuck" is as simple as providing the slave device enough clocks to convince it that the last transaction is complete. Some slaves behave well, and after clocking them 8 times and providing a NAK they will abort their current transaction. I have seen slaves, especially I2C memories, where you have to supply more than 8 clocks to be certain that the transaction terminates, e.g. 32 clocks. I have also seen specialized slave devices that will ignore your NAKs and insist on sending even more data, e.g. 128 or more bits, before giving up on an interrupted transaction.

The nasty part about getting an I2C bus "unstuck" is that you usually cannot use the I2C peripheral itself to perform this service. This means that typically you will need to disable the peripheral, change the pins to GPIO mode and bit-bang the clocks you need out of the port, after which you need to re-initialize the I2C peripheral and try the next transaction, and if this fails then rinse and repeat the process until you succeed. This, of course, is expensive in terms of code space, especially on small 8-bit implementations, and looks something like the sketch below.
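For what it is worth, here is a sketch of that recovery sequence. The GPIO helpers (SCL_HIGH, SCL_LOW, SDA_HIGH, SDA_LOW, SDA_READ) and delay_us are hypothetical and must be mapped to your device once the pins have been switched out of I2C mode, and the clock count of 9 only covers the well-behaved-slave case:

#include <stdbool.h>
#include <stdint.h>

/* SCL_HIGH/SCL_LOW/SDA_HIGH/SDA_LOW/SDA_READ and delay_us are placeholders
   for your own GPIO and delay routines. */

bool i2c_bus_unstick(void)
{
    for (uint8_t i = 0; i < 9; i++)   /* clock the slave until it releases SDA (some slaves need more) */
    {
        if (SDA_READ())
        {
            break;                    /* data line released - slave has let go */
        }
        SCL_LOW();
        delay_us(5);                  /* roughly 100kHz bit timing */
        SCL_HIGH();
        delay_us(5);
    }

    /* generate a stop condition: SDA goes low-to-high while SCL is high */
    SDA_LOW();
    delay_us(5);
    SCL_HIGH();
    delay_us(5);
    SDA_HIGH();
    delay_us(5);

    return SDA_READ();                /* true if the bus looks free again */
}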
Problem #6 - Required Repeated Start conditions

The R/W bit comes back to haunt us for this one. The presence of this bit implies that all transactions on I2C should be uni-directional, that is, they must either read or write, but in practice things are not that simple. Typically a sensor or memory will have a number of register locations inside the device, and you will have to "write" to the device to specify which location you wish to address, followed by "reading" from the device to get the data. The problem with a bus is that something may interrupt you between these two operations that form one larger transaction. In order to overcome this limitation, I2C allows you to concatenate 2 I2C operations into a single transaction by omitting the stop condition between them. So you can do the write operation and, instead of completing it with a stop condition on the bus, follow it with a second start condition and the latter half of the operation, terminating the whole thing with a stop condition only when you are done. This is called a "repeated start" condition and looks as follows (from the BME280 datasheet).

It can often be quite a challenge to generate such a repeated start condition, as many I2C drivers will require you to specify read/write, a pointer and a number of bytes, and not give you the option to omit the stop condition, and many slave devices will reset their state machines at a stop condition, so without a repeated start it is not possible to communicate with these devices. Of course, I should also mention that the requirement to send the slave address twice for these transactions significantly reduces the throughput you can get through the bus.

Problem #7 - What is Ack and Nak supposed to be?

This brings us to the next problem. It is quite clear that the address is ack-ed by the slave, but when you are reading data, what are the exact semantics of the ack/nak? The BME280 datasheet is a bit unique in that it clearly distinguishes in the figure in #6 above whether the ack should be generated by the master or the slave (ACKS vs ACKM), but from the specification it is not immediately clear. If I read data from a slave, who is supposed to provide the ack at the end of the data? Is this the master or the slave? What would be the purpose of the master providing an ack to the slave's data? Clearly the master is alive, as it is generating clocks, and the slave may be sending all 1's, which means it does not touch the bus at all. So what is the slave supposed to do if the master sends a Nak in response to a byte? And how would a slave determine whether it should Nak your data? Since there is no checksum or CRC on the data, there is no way to determine whether it is correct. None of this is clearly specified anywhere.

To add confusion, I have seen people spend countless hours looking for the bug in their BME280 code which causes the last data byte to get a NAK! When you look at the bus analyzer or oscilloscope you will be told that every byte was followed by an ACK, except for the last one where you will see a NAK. Most people interpret this NAK to be an indication that something is wrong, but no, look carefully at the image from the datasheet in section #6 above! Each byte received by the master is followed by an ACKM (ack-ed by the master) EXCEPT for the last byte, in which case the master will not ACK it, causing a NAK to precede the stop condition! To make this even harder, most I2C hardware peripherals will not allow you fine-grained control of whether the master will ACK or NAK. Very often the peripheral will just blithely ack every byte that it reads from the slave regardless.

Problem #8 - Pull-up resistors and bus capacitance

The I2C bus is designed to be driven only through open-drain connections pulling the bus down; it is pulled up by a pair of pull-up resistors (one on the clock line and one on the data line). I have seen many a young engineer struggle with unreliable I2C communication due to either entirely missing or incorrectly sized pull-up resistors. Yes, it is possible to actually communicate even without the resistors due to parasitic pull-ups, which will be much larger than required, meaning that they pull weakly enough to get the bus high-ish and under some conditions can provoke an ack from a slave device. There is no clear specification of what the size of these pull-up resistors should be, and for good reason, but this causes a lot of uncertainty.

The I2C specification does specify that the maximum bus capacitance should be 400pF. This is a pretty tricky requirement to meet if you have a large PCB with a number of devices on the bus, and it is often overlooked, so it is typical to encounter boards where the capacitance exceeds the official specification. In the end, the pull-up needs to be strong enough (that is, small enough) to pull the bus to Vdd fast enough to communicate at the required bus speed (typically 100kHz or 400kHz). The higher the bus capacitance, the stronger you will have to pull up in order to bring the bus to Vdd in time. If you look at the oscilloscope you will see that the bus goes low fairly quickly (pulled down strongly to ground) but goes up fairly slowly, something like this. As you can see in the trace to the right there are a number of things to consider.
Problem #8 - Pull-up resistors and bus capacitance

The I2C bus is designed to be driven only through open-drain connections pulling the bus down; it is pulled up by a pair of pull-up resistors (one on the clock line and one on the data line). I have seen many a young engineer struggle with unreliable I2C communication due to either entirely missing or incorrectly sized pull-up resistors. Yes, it is possible to actually communicate even without the resistors due to parasitic pull-ups, which will be much larger (that is, weaker) than required, meaning the bus is pulled up just hard enough to get high-ish, and under some conditions this can provoke an ack from a slave device. There is no clear specification of what the size of these pull-up resistors should be, and for good reason, but this causes a lot of uncertainty. The I2C specification does specify that the maximum bus capacitance should be 400pF. This is a pretty tricky requirement to meet if you have a large PCB with a number of devices on the bus, and it is often overlooked, so it is typical to encounter boards where the capacitance exceeds the official specification.

In the end, the pull-up needs to be strong enough (that is, small enough) to pull the bus to Vdd fast enough to communicate at the required bus speed (typically 100kHz or 400kHz). The higher the bus capacitance, the harder you will have to pull up in order to bring the bus to Vdd in time. If you look at the oscilloscope you will see that the bus goes low fairly quickly (pulled down strongly to ground) but goes up fairly slowly, something like this:

As you can see in the trace to the right there are a number of things to consider. If your pull-ups are too large you will get a lot of interference, as indicated by the red arrows in the trace, where the clock line is coupling through onto the data line which has too high an impedance. This can often be alleviated with good layout techniques, but if you see this on your scope, consider lowering the pull-up value to hold the bus more steady. If the rise times through the pull-up are too slow for the bus speed you are using, you will have to either work on reducing the capacitance on the bus or pull up harder through a smaller resistor. Of course, you cannot in the extreme just tie the bus to Vdd, as it still needs to be pulled to 0 by the master and slaves. As a last consideration, the smaller the resistor you use, the more power you will consume while driving the bus.

Problem #9 - Multi-master

I have been asked many times to implement multi-master I2C. There are a large number of complications when you need multiple masters on an I2C bus, and this should only be attempted by true experts. Arbitrating the bus when multiple masters are pulling it down simultaneously presents a number of race conditions that are extremely hard to deal with robustly. I would like to point out just one case here as an illustration and leave it at that.

Typical schemes for multi-master require the master to monitor the bus while it is emitting the address byte. When the master is not pulling the bus low but reads it low, this is an indication that another master is trying to emit an address at the same time, and you as a master should then abort your transaction immediately, yielding the bus to the other master. A problem arises though when both masters are trying to read from the same slave device. When this happens it is possible that both addresses match exactly and that the 2 masters start their transactions in close proximity. Due to the clock skew between the masters, it is possible that they are trying to read from different control registers on the slave, that the slave will match only one of the 2 masters, but that both masters will think they have the bus and that the slave is responding to their request. When this happens, the one master will end up receiving incorrect data from the wrong address. Consider e.g. a BME280 where you may get the pressure reading instead of humidity, causing you to react incorrectly. Like I said, there are many obscure ways multi-master can fail you, so beware when you go there.

Problem #10 - Clock Stretching

In the standard, slaves are allowed to stretch the clock by driving the clock line low after the master releases it. Clock-stretching slaves are a common cause of I2C busses becoming stuck, as the standard does not provide for timeouts. This is something where SMBUS has provided a large improvement over the basic I2C standard, although there can still be ambiguity around how long you really have to wait to ensure that all slaves have timed out. The idea with SMBUS is that you can safely mix it with non-SMBUS slaves, but this one aspect makes it unreliable to do so. In critical systems you will as a result very often see 2 I2C slave devices connected via different sets of pins, using I2C as a point-to-point communications channel instead of a bus, in order to isolate failure conditions to a single sensor.
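Since the I2C standard itself gives you no timeout, the practical mitigation on the master side is to bound how long you are willing to wait for a stretched clock yourself. Here is a minimal sketch, assuming a hypothetical SCL_READ macro that samples the released clock pin, with a roughly 35ms cap loosely borrowed from the SMBUS timeout figure (check the SMBUS specification for the exact values):

#include <stdint.h>
#include <stdbool.h>

// Sketch only: wait for a clock-stretching slave to release SCL, but give up
// after roughly 35 ms instead of hanging forever. SCL_READ is a hypothetical
// macro that samples the clock pin; __delay_us comes from your device header.
static bool i2c_wait_scl_release(void)
{
    uint16_t timeout_us = 35000;     // ~35 ms, counted in 1 us steps

    while (SCL_READ == 0)            // slave is still holding the clock low
    {
        if (timeout_us-- == 0)
        {
            return false;            // bus is stuck, trigger bus recovery
        }
        __delay_us(1);
    }
    return true;                     // clock released, carry on
}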
Problem #11 - SMBUS voltage levels

In I2C the logic levels depend on the bus voltage: above 70% of the bus voltage is a 1 and below 30% is a 0. The problems here are numerous, resulting in different devices seeing a 0 or a 1 at different levels. SMBUS devices do not use this mechanism but instead specify fixed thresholds at 0.8V and 2.1V. These levels are often not supported by the microcontroller you are using, leaving some room for misinterpretation, especially once you add the effects of bus capacitance and the pull-up resistors on signal integrity. For more information about SMBUS and where it differs from the standard I2C specification, take a look at the Wikipedia page on SMBUS.

Problem #12 - NAK Polling

NAK polling often comes into play when you are trying to read from or write to an I2C memory and the device is busy. These memory devices use the NAK mechanism to signal the master that it has to wait and retry the operation in a short while. The problem here is that many hardware I2C peripherals simply ignore acks and naks altogether, or do not give you the hooks required to respond to them. Many vendors try to accelerate I2C operations by letting you pre-load a transaction for sending to the slave and doing all of the transmission in hardware using a state machine, but these implementations rarely have accommodations for retrying the byte if the slave NAKs it. NAK polling also makes it very hard to use DMA for speeding up I2C transmissions, as once again you need to make a decision based on the ack/nak after every byte, and the hooks to make these decisions typically require an interrupt or callback at the end of every byte, which causes huge overhead.

Problem #13 - Bus Speeds

When starting to bring up an I2C bus I often see engineers starting with one sensor and working their way through them one by one. This can lead to a common problem: the first sensor may be capable of high-speed transmission, e.g. 1MHz, but it takes only one sensor on the bus limited to 100kHz to cause all kinds of intermittent failures. When you have more than 1 slave on the same bus, make sure that the bus is running at a speed that all the slaves can handle. This means that when you bring up the bus it is always a good idea to start things out at 100kHz and only increase the speed once you have communication established with all the slaves on the bus. The more slaves you have on the bus, the more likely you are to have increased bus capacitance and signal integrity problems.

In conclusion

I2C is quite a widely used standard. When I am given the choice between using I2C or something like SPI for communication with sensor devices I tend to prefer SPI for a number of reasons. It is possible to go much faster using SPI as the bus is driven hard to both 0 and 1, while the complexities of I2C and the problems outlined above inevitably raise the complexity significantly and present a number of challenges to achieving a robust system. To be clear, I am not saying I2C is not a robust protocol, just that it takes some real skill to use it in a truly robust way, while other protocols like SPI do not require the same effort to achieve a robust solution.

So like the Plain White T's say, hate is a strong word, but I really really don't like I2C ...

I am sure there are more ways I2C has bitten people, please share your additional problems in the comments below, or if you are struggling with I2C right now, feel free to post your problem and we will try to help you debug it!
  24. I decided to write this up as bootloaders have pretty much become ubiquitous for 32-bit projects, yet I was unable to find any good information on the web about how to use linker scripts with XC32 and MPLAB-X. When you need to control where the linker will place which parts of your code, you need to create a linker script which will instruct the linker where to place each section of the program.

Before we get started you should download the MPLAB XC32 C/C++ Linker and Utilities Users Guide. There is also some useful information in the MPLAB XC32 C/C++ Compiler User's Guide for PIC32M MCUs; the appropriate version for your compiler should be in the XC32 installation folder under "docs". This business of linker scripts is quite different from processor to processor. I have recently been working quite a bit with the PIC32MZ2048EFM100, so I will target this device using the latest XC32 v2.15. This post will focus on what you need to do to get the tools to use your linker script. Since XC32 is basically a variant of the GNU C compiler you can find a lot of information on the web about how to write linker scripts, here are a couple of links:

http://www.scoberlin.de/content/media/http/informatik/gcc_docs/ld_3.html
https://sourceware.org/binutils/docs-2.17/ld/Scripts.html#Scripts

Adding a linker script

The default linker script for the PIC32MZ2048EFM100 can be found in the compiler folder at /xc32/v2.15/pic32mx/lib/proc/32MZ2048EFM100/p32MZ2048EFM100.ld. If you need a starting point, that would be a good place. For MPLAB-X and XC32 the extension of the linker script does not have any meaning. The linker script itself is not compiled, it is passed to the linker at the final step of building your program. The command line should look something like this for a simple program:

"/Applications/microchip/xc32/v2.15/bin/xc32-gcc" -mprocessor=32MZ2048EFM100 -o dist/default/production/mine.X.production.elf build/default/production/main.o -DXPRJ_default=default -legacy-libc -Wl,--defsym=__MPLAB_BUILD=1,--script="myscript.ld",--no-code-in-dinit,--no-dinit-in-serial-mem,-Map="dist/default/production/mine.X.production.map",--memorysummary,dist/default/production/memoryfile.xml

The linker script should be listed on the command line as --script="name".
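If you have never looked inside one of these scripts, the skeleton below shows the general shape in GNU ld syntax. The region names, origins and lengths here are purely illustrative; copy the real MEMORY definitions from the device's default .ld file mentioned above and modify only what you need, for example shifting the program region up to leave room for a bootloader.

/* Illustrative skeleton only - take the real region names, origins and
   lengths from p32MZ2048EFM100.ld and adjust them for your application. */
MEMORY
{
  program_flash (rx)  : ORIGIN = 0x9D000000 + 0x10000, LENGTH = 0x200000 - 0x10000  /* application placed after a hypothetical 64k bootloader */
  data_ram      (rwx) : ORIGIN = 0x80000000, LENGTH = 0x80000
}

SECTIONS
{
  .text : { *(.text*) *(.rodata*) } > program_flash
  .data : { *(.data*) }             > data_ram
  .bss  : { *(.bss*) *(COMMON) }    > data_ram
}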
When you create a new project, MPLAB-X will create a couple of "Logical Folders" for you. These folders are not actual folders on your file system, but files in them are sometimes treated differently, and "Linker Files" is a particular case of this. My best advice is never to rename or in any other way mess with these folders created for you by MPLAB. If you did edit configurations.xml or renamed any of these folders, I suggest you just create a new project file, as there are so many ways this could go wrong that fixing it will probably take you longer than just re-creating it. I have seen cases where it all looks 100% correct but the IDE simply does not use the linker script, just ignoring it.

The normal way to add files to an MPLAB-X project is to right-click on the logical folder you want the file to appear in and select the kind of file under the "New" menu. In this menu, files that you use often are shown as shortcuts; to see the entire list of possible files you need to select "Other..." at the bottom of the list. Unfortunately Microchip has not placed "Linker Script" in this list, so there is no way to discover through the IDE how to add a linker script. When it all goes according to plan (the happy path) you can simply right-click on "Linker Files" and add your script. This is also what the manual says to do, of course.

When you have added the file it should look like this (pay careful attention to the icon of the linker script file: it should NOT have a source code icon, it should just be a plain white block). If this is the case, the program should compile just fine using the linker script, and you can confirm that the script is being passed in by inspecting the linker command line.

Adding a linker script - Problems - when it all goes wrong!

I noticed in the IDE that the icon for the script was actually that of a .C source file. When this happens something has gone very wrong, and the compiler will attempt to compile your linker script as a C source file. You will end up getting an error similar to this, stating that there is "No rule to make target":

CLEAN SUCCESSFUL (total time: 51ms)
make -f nbproject/Makefile-default.mk SUBPROJECTS= .build-conf
make[2]: *** No rule to make target 'build/default/production/newfile.o', needed by 'dist/default/production/aaa.X.production.hex'. Stop.
make[1]: Entering directory '/Users/cobusve/MPLABXProjects/aaa.X'
make[2]: *** Waiting for unfinished jobs....
make -f nbproject/Makefile-default.mk dist/default/production/aaa.X.production.hex
make[2]: Entering directory '/Users/cobusve/MPLABXProjects/aaa.X'
make[1]: *** [.build-conf] Error 2
"/Applications/microchip/xc32/v2.15/bin/xc32-gcc" -g -x c -c -mprocessor=32MZ2048EFM100 -MMD -MF build/default/production/main.o.d -o build/default/production/main.o main.c -DXPRJ_default=default -legacy-libc
make: *** [.build-impl] Error 2
make[2]: Leaving directory '/Users/cobusve/MPLABXProjects/aaa.X'
nbproject/Makefile-default.mk:90: recipe for target '.build-conf' failed
make[1]: Leaving directory '/Users/cobusve/MPLABXProjects/aaa.X'
nbproject/Makefile-impl.mk:39: recipe for target '.build-impl' failed
BUILD FAILED (exit value 2, total time: 314ms)

I tried jumping through every hoop here, even did the hokey pokey, but nothing would work to get the IDE to accept my linker script! I even posted a question on the forum here and got no help. At first I thought I would be clever and remove the script I just added, and just re-add it to the project, but no luck there. So now I was following the instructions exactly: my project was building without the script, I right-clicked on "Linker Files", selected "Add Existing Item" and then selected my script, and once again it showed up as a source file and caused the project build to fail by trying to compile it as C code 😞

Next attempt was to remove the file and close the IDE, then open the IDE, build the project, and only after this add the existing file. Nope, still does not work 😞

I know MPLAB-X will from time to time cache information, and you can get rid of this by deleting everything from the project except for your source files, the Makefile, configurations.xml and project.xml. I went ahead and deleted all of those cached files, restarted the IDE, added the file again - nope - still does not work. So much for RTFM!

Eventually, out of desperation, I tried to rename the file before adding it back in. Even this did not work until I got lucky - I changed the extension of the file to .gld (a commonly used extension for GNU linker files), tried to re-add the file, and this eventually worked! If you are having a hard time getting MPLAB-X to add your linker script, do not despair. You are probably not doing anything wrong!
The right way to add a linker script to your project is indeed to just add it to "Linker Files" as they say; sometimes you just get unlucky due to some exotic bugs in the IDE. Just remove the file from your project, change the extension to something else (it seems like you can choose anything as long as it is different) and add the file back in, and it should work. If not, come back here and let me know and we can figure it out together :)
  25. Ok, great news, I figured out what was going wrong! I was working with an old project file. The project was not using a linker script before. It turns out that MPLAB-X does all kinds of strange things in the background to figure out that it has to treat files in the Logical Folder identified by name="LinkerScript" and displayName="Linker Files" as linker scripts instead of C files, and once it has gotten itself confused about this there is no going back without recreating the entire project file. Now since ours contained hundreds of source files we tried to avoid this, but alas, it turns out there is not really another way :(

There is an example here https://www.microchip.com/forums/m651658.aspx of how to add the item back in. This seems to only work if you add it in AND rename the item BEFORE opening the project in MPLAB-X; if you open the project first you will be out of luck. For now you will have to do a lot of trial and error, or just re-create the project if you need to add a linker script, and even then good luck, the IDE can muck it up quite easily!

I think I see a blog post coming on how to get a linker script into your MPLAB-X project. It seems to be harder than it should be!

Edit: I have written up my experience in a blog entry here:
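For reference, the logical folder entry the IDE keys off lives in the project's configurations.xml, and once a script has been added correctly it looks roughly like the snippet below. This is reconstructed from memory purely as an illustration; the name and displayName attributes match the ones mentioned above, but the surrounding structure may differ between MPLAB-X versions, so compare against a freshly created project before editing anything by hand.

<!-- Illustrative only: a correctly registered linker script inside configurations.xml -->
<logicalFolder name="LinkerScript" displayName="Linker Files" projectFiles="true">
  <itemPath>myscript.ld</itemPath>
</logicalFolder>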