Everything posted by Orunmila

  1. Yes, those are affiliate links, and if you do buy a book or anything else starting from those links, Amazon will pay us a small commission which will help keep the site up. This is also explained in the Welcome Message. Do note that if you save the book for later we will not get any fee; clicking the link sets a cookie with Amazon which is active for 24 hours only...
  2. I can do that, but later as we add to or remove from the list it may get messy. Perhaps I leave the list like that and post the reviews below the basic list? That way we get the same effect? Edit: I did the first 2, how does that look? Edit2: Ok, there you go, I think we are in business now.
  3. I am thinking about expanding the post to show each book and next to it a short paragraph describing why we have it on the list. This will make the list much bigger, as in it will not fit on the screen, but I think the visual aspect will make it much easier to find what you are looking for in the sea of letters this is becoming. Please let me have your thoughts!
  4. This is a "must read" list for Embedded Software Engineers. If we missed one, please let us know in the comments! We are particularly interested in books we missed when compiling the list; if you leave a comment and we agree, it will be added promptly. Here is "The List" in short form, conveniently made up as Amazon.com links. Remember, if you follow any of these links before shopping on Amazon they will make a contribution to help us support this site! Scroll down for a more detailed list with cover pictures.
The C Programming Language, 2nd Edition
Design Patterns: Elements of Reusable Object-Oriented Software
Code Complete: A Practical Handbook of Software Construction, Second Edition
Making Embedded Systems: Design Patterns for Great Software
Software Estimation: Demystifying the Black Art (Developer Best Practices)
The Art of Computer Programming, Volumes 1-4A Boxed Set
The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition
Refactoring: Improving the Design of Existing Code (2nd Edition)
UML Distilled: A Brief Guide to the Standard Object Modeling Language (3rd Edition)
Clean Code: A Handbook of Agile Software Craftsmanship
Software Architecture in Practice: Software Architect Practice_c3 (SEI Series in Software Engineering)
97 Things Every Programmer Should Know: Collective Wisdom from the Experts
Programming 32-bit Microcontrollers in C: Exploring the PIC32 (Embedded Technology)
The Pragmatic Programmer: From Journeyman to Master
Compilers: Principles, Techniques, and Tools (2nd Edition)
Applied Cryptography: Protocols, Algorithms and Source Code in C
Structure and Interpretation of Computer Programs - 2nd Edition
Introduction to Algorithms, 3rd Edition
Honorable Mentions. Books not quite worthy of "The List" but still important recommended reading.
The C99 Standard. Really, you should have read this already if you are going to program anything embedded!
(PDF link to the draft)
Zen and the Art of Motorcycle Maintenance: An Inquiry into Values
Guide to the Software Engineering Body of Knowledge (SWEBOK(R)): Version 3.0
A Guide to the Project Management Body of Knowledge (PMBOK® Guide)–Sixth Edition
Happy Reading!
1. The C Programming Language, 2nd Edition
This is our No. 1 must-read book if you are going to be doing embedded programming. Written by Kernighan and Ritchie, the inventors of the C language. Learn how the C language was designed to work and why. It is packed with numerous exercises to ensure you understand every concept. You really should keep this on your desk as a reference if you ever get stuck.
2. Design Patterns: Elements of Reusable Object-Oriented Software
Design patterns are how we communicate as Software Engineers about architectural details. If a building architect said the building should be "Tuscan Style" this would mean a wealth of things to people on the project about shape, size, colors, building materials, etc. Design patterns form a similar language for Software Engineers and are a crucial tool in your arsenal. This is the original book, known as the Gang of Four book or GoF for short. A must-read before you venture further into other design patterns books.
3. Code Complete: A Practical Handbook of Software Construction, Second Edition
This is by far the best all-round book about software development. It covers all aspects of Software Engineering to some degree; it is very thorough and a must-read just to make sure you know what is out there.
4. 
Making Embedded Systems: Design Patterns for Great Software
This is by far the best introductory book we have seen, but it has an equal number of gems in there for experienced campaigners, especially in the later sections on optimization (Doing More with Less) and math, which covers floating point issues and precision. We love the section "How NOT to use interrupts", and the one on Bootloaders, for example.
5. Software Estimation: Demystifying the Black Art (Developer Best Practices)
This is just a brilliant book on software project management. What makes it great is how it covers 100% of the foundational theory on estimation and planning and also covers the personal side. We love the scripts and dialogs coaching you on how to present your estimates to management in such a way that they will not force unreasonable deadlines upon your team. McConnell explains that the "Science of Estimation" is mathematically intensive, uses all kinds of really complex formulae, and can produce estimates accurate to within about 5%. He then explains that this book is NOT about the science; it is about what he calls the "Art of Estimation", which will not get you to 5%, but will be good enough for most projects to be managed.
6. The Art of Computer Programming, Volumes 1-4A Boxed Set
Computer programming is based on a lot of science. Without a solid knowledge of data structures and algorithms, programming a microcontroller system is like trying to do woodwork with your bare hands, scratching away with your nails. You really have to cover these fundamentals, and Knuth is the all-time master at teaching them.
7. The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition
This is one of those books which is quoted so often you will quickly give away the fact that you are the only one in the room who has not read it. Don't be that guy! 
And remember, adding more people to a project when it is late will make it even later, and putting 9 women on the job cannot create a baby in 1 month! But seriously, the best part of this book is probably the chapter "The Surgical Team", which beautifully explains the core principles SCRUM and small Agile teams are built on, written decades before the rest of us realized that Fred Brooks was right all along!
8. Refactoring: Improving the Design of Existing Code (2nd Edition)
Martin Fowler is probably the greatest mind in Computer Science today, and he does not get the credit he deserves for it. Read this book and you will find out firsthand just how much we can learn from this guy. I am not kidding if I say that his Event Sourcing architectural pattern is THE ONLY way to go for even moderately complex embedded systems. This book covers the fundamentals you need to be Agile: get your code out there quickly so you can test your requirements and get customer feedback, then apply this book to refactor your existing code in such a way that your architecture improves and you stay on the blue line (Design Stamina Hypothesis - Google it!).
9. UML Distilled: A Brief Guide to the Standard Object Modeling Language (3rd Edition)
Another Martin Fowler book. Especially in Embedded Systems we see time and again that not enough design is happening. The old saying that you have to solve the problem first and then write the code is not taught enough! This book will give you all the tools you need to create the sequence diagrams, deployment diagrams and static structure diagrams you need to communicate and "reason about the system" (yes, that is indeed me quoting from "Software Architecture in Practice").
10. Clean Code: A Handbook of Agile Software Craftsmanship
Uncle Bob is just a legend when it comes to the tactics of writing software. We are big fans of the SOLID principles, and almost everything he covers in this book can make you a better coder. 
Also check out his website and training videos; most of them will teach you something new and they are all entertaining as hell.
11. Software Architecture in Practice: Software Architect Practice_c3 (SEI Series in Software Engineering)
Those who were lucky enough to study computer science will already have this book, as every Computer Science course worth its salt uses this as the textbook for the Architecture course. We really love how this book enumerates and covers the pros and cons of the majority of high-level architectural patterns we use in computer systems today.
12. 97 Things Every Programmer Should Know: Collective Wisdom from the Experts
I discovered Kevlin Henney only recently, but I love the ideas he is teaching: things like reminding us that software is written for people to read and understand, and the concept of signal-to-noise ratio in code. He explains that spaces are indeed superior to tabs and why. This book is a great collection of almost 100 tactics you can apply on a daily basis to improve your code. If you want to stand on the shoulders of giants it is critical that you heed their advice, and this is a great collection of expert advice.
13. Programming 32-bit Microcontrollers in C: Exploring the PIC32 (Embedded Technology)
When it comes to the PIC32 there is no better way to discover how it works and how to program it than this book. The fact that he actually works for Microchip gives Lucio amazing depth of insight into how this device was designed to be used and what its strengths and weaknesses are. In fact, if you want a book to learn about PIC microcontrollers, we recommend you search for Lucio Di Jasio on Amazon and pick the one for your platform!
14. The Pragmatic Programmer: From Journeyman to Master
This is another one of those classic books that keeps popping up on every "best programming books" list. This book covers loads of practical advice on how to make your code better in general. 
Ward Cunningham reviewed it and concluded that "The Pragmatic Programmer illustrates the best practices and major pitfalls of many different aspects of software development. Whether you're a new coder, an experienced programmer..." We agree!
15. Compilers: Principles, Techniques, and Tools (2nd Edition)
Ok, we know that if you want to learn how to write a compiler today there are better texts than this, but this is still the book every compiler designer recommends at some point. What I love about this book is that it explains pretty early on what the compilation process looks like, which leads to understanding the reasons why compilers do things that can seem silly but are actually essential to produce working code consistently. It will always help you be a better programmer to have at least a rudimentary understanding of how compilers and linkers work, and this is a great place to start.
16. Applied Cryptography: Protocols, Algorithms and Source Code in C
Security is getting more and more important in our connected world. Don't even try to do any security yourself unless you have read this book cover to cover. I am serious - don't! This is really the best place to learn the fundamentals of information security. Schneier is not only a world-renowned expert on the topic, but he has a talent for explaining an extremely complex topic in a truly accessible way.
17. Structure and Interpretation of Computer Programs - 2nd Edition
Brought to you by the "MIT Electrical Engineering and Computer Science" team, this is a fantastic book about the science of programming. If you are a "Tinkerer" who is happily surprised when your code runs and leaves comments like "no idea why this works" in your code, this is probably not going to be for you. If you want to write robust code in a systematic way using Science and Engineering, this is a must-read. 
If you taught yourself how to program and only know imperative programming, then this book will go a long way toward filling in your blind spots, introducing you to a wealth of knowledge you never knew was out there.
18. Introduction to Algorithms, 3rd Edition
An excellent textbook on algorithms, covering subjects from basics like big "O" notation to advanced van Emde Boas trees and multithreaded algorithms. This book is used as the textbook for algorithms classes at universities like MIT, CMU, Stanford and Yale.
Please help us improve this list by posting feedback in the comments below. Let us know if new editions are published, links are dead, etc.
  5.


    Some reading is just compulsory in Computer Science; like Shakespeare is to English, you will get no admiration from your peers if it comes out that you have never heard of the Bastard Operator from Hell. There is a whole collection of BOFH stories online here http://bofh.bjash.com/index.html#Original and even a Wikipedia page of course https://en.wikipedia.org/wiki/Bastard_Operator_From_Hell. The stories were originally posted on Usenet around 1992 by Simon Travaglia. I would recommend you start at #1 here http://bofh.bjash.com/bofh/bofh1.html For those who need some motivation to click, here is a short excerpt ...
  6.

    Melbourne Weather

    Btw. I split this conversation into a topic of its own, a moderator action which I love on this platform! No more hijacking of threads! You can use the "split" action in bulk by selecting a bunch of posts (top right of each post) like I did here and splitting to a new topic, or you can do them one by one, splitting into an existing topic. This means that as a moderator you can move any post to any other existing thread, or to a new one, which will be attributed to the author of the oldest post in the thread 🙂 I do like this!
  7. Yup, it sure does. On another occasion a very similar rounding error was not caught and a lot of people died; if I recall correctly, it was the deadliest single incident for US forces in the Gulf War. https://hownot2code.com/2016/11/09/r-17-vs-patriot-a-rounding-issue-bugs-in-a-missile-defense-system/
  8. They write the stuff - what we can learn from the way the Space Shuttle software was developed
As a youngster I watched the movie "The Right Stuff" about the Mercury Seven. The film deified the picture of an Astronaut in my mind as some kind of superhero. Many years later I read and just absolutely loved an article written by Charles Fishman in Fast Company in 1996, entitled "They Write the Right Stuff", which did something similar for the developers who wrote the code which put those guys up there. Read the article here: https://www.fastcompany.com/28121/they-write-right-stuff
I find myself going back to this article time and again when confronted by the question of how many bugs are acceptable in our code, or even just what is possible in terms of quality. The article explores the code which ran the Space Shuttle and the processes the team responsible for this code follows. This team is one of very few teams certified to CMM level 5, and the quality of the code is achieved not through intelligence or heroic effort, but by a culture of quality ingrained in the processes they have created. Here are the important numbers for the Space Shuttle code:
Lines of code: 420,000
Known bugs per version: 1
Total development cost: $200,000,000
Cost per line of code (1975 value): $476.00
Cost per line of code (2019 inflation-adjusted value): $2,223.00
Development team: approximately 100 people
Average productivity: 8.04 lines/developer/workday
The moral of the story here is that the quality everyone desires is indeed achievable, but we tend to severely underestimate the cost which goes along with that level of quality. I have used the numbers from this article countless times in arguments about the trade-off between quality and cost on my projects. The most important lesson to learn from this case study is that quality seems to depend primarily on the development processes the team follows and the culture of quality which they adhere to. 
As Fishman points out, in software development projects most people focus on attempting to test quality into the software, and as Steve McConnell pointed out, this is "like weighing yourself more often" in an attempt to lose weight. Another lesson to be learned is that the process inherently accepts that people will make mistakes. Quality is not ensured by people trying really hard to avoid mistakes; instead, the process accepts that mistakes will be made and builds the detection and correction of those mistakes into the process itself. This means that if something slips through, it is inappropriate to blame any individual for the mistake; the only sensible thing to do is to acknowledge a gap in the process. So when a bug does make it out into the wild, the team will focus on how to fix the process instead of trying to lay blame on the person who was responsible for the mistake. Overall this article is a great story which is chock-full of lessons to learn and pass on.
References
The full article is archived here: https://www.fastcompany.com/28121/they-write-right-stuff
NASA has a great piece published on the history of the software: https://history.nasa.gov/computers/Ch4-5.html
Fantastic read: the interview with Tony Macina published in Communications of the ACM in 1984 - http://klabs.org/DEI/Processor/shuttle/shuttle_primary_computer_system.pdf
  9. Being in the User's Guide does not always mean that it still works :). It is possible that it now only works when you select the C90 option and that this caveat is not well documented ... I don't know. I know that with C99 they replaced the front-end with the CLANG-based front-end and kept the old back-end; I am not sure if this change affects the way the switch is implemented. It could be done in the templates or in the ASN step I guess, so it could go either way. The best option is probably to try one and check the list file to see what happens ...
  10. Yes, and there used to be some "unofficial" pragmas you could use to force it to a specific way of doing this. I am not sure whether they are still valid after the XC8 2.0 changes, but you used to be able to choose strategies using these. Seems like PIC18 was the stepchild ... This is what it was like in the User Guide
  11. Another example from the Microchip forums ... https://www.microchip.com/forums/m1084035.aspx Similar kinds of problems, commutation of a motor needs to be as fast as possible ...
  12. error: variable has incomplete type 'void'
If you are getting this error message trying to compile code that used to work, or from some example code you found somewhere, it is very likely because of the changes in the XC8 interrupt syntax introduced with v2.0 of XC8. Specifically, I am getting this today for my interrupt service routine definition. I used to use the age-old way of doing this in XC8 as follows:
// Old way to do this
void interrupt myISR(void)
{
    // Old way interrupt code here
}
After the changes to the XC8 compiler, which were mostly motivated by better C standard compliance, particularly with C99, the syntax for that function should now use the commonly adopted concept of function declaration-specifier attributes, which traditionally start with 2 underscores and take either a list of parameters in parentheses or empty parentheses if no parameters are present.
// New and improved and C99-compliant way to specify an interrupt service routine in XC8
void __interrupt() myISR(void)
{
    // New and improved interrupt code here
}
This syntax is now also consistent between XC8, XC16 and XC32. Please see this post for more information on how to either work around this or change to the new syntax: https://www.microforum.cc/topic/5-i-used-to-use-to-locate-variables-but-since-xc8-20-this-is-no-longer-working/
  13. Real Programmers don't use Pascal I mentioned this one briefly before in the story of Mel, but this series would not be complete without this one. Real Programmers Don't Use PASCAL Ed Post Graphic Software Systems P.O. Box 673 25117 S.W. Parkway Wilsonville, OR 97070 Copyright (c) 1982 (decvax | ucbvax | cbosg | pur-ee | lbl-unix)!teklabs!ogcvax!gss1144!evp Back in the good old days -- the "Golden Era" of computers, it was easy to separate the men from the boys (sometimes called "Real Men" and "Quiche Eaters" in the literature). During this period, the Real Men were the ones that understood computer programming, and the Quiche Eaters were the ones that didn't. A real computer programmer said things like "DO 10 I=1,10" and "ABEND" (they actually talked in capital letters, you understand), and the rest of the world said things like "computers are too complicated for me" and "I can't relate to computers -- they're so impersonal". (A previous work [1] points out that Real Men don't "relate" to anything, and aren't afraid of being impersonal.) But, as usual, times change. We are faced today with a world in which little old ladies can get computerized microwave ovens, 12 year old kids can blow Real Men out of the water playing Asteroids and Pac-Man, and anyone can buy and even understand their very own Personal Computer. The Real Programmer is in danger of becoming extinct, of being replaced by high-school students with TRASH-80s! There is a clear need to point out the differences between the typical high-school junior Pac-Man player and a Real Programmer. Understanding these differences will give these kids something to aspire to -- a role model, a Father Figure. It will also help employers of Real Programmers to realize why it would be a mistake to replace the Real Programmers on their staff with 12 year old Pac-Man players (at a considerable salary savings). 
LANGUAGES The easiest way to tell a Real Programmer from the crowd is by the programming language he (or she) uses. Real Programmers use FORTRAN. Quiche Eaters use PASCAL. Nicklaus Wirth, the designer of PASCAL, was once asked, "How do you pronounce your name?". He replied "You can either call me by name, pronouncing it 'Veert', or call me by value, 'Worth'." One can tell immediately from this comment that Nicklaus Wirth is a Quiche Eater. The only parameter passing mechanism endorsed by Real Programmers is call-by-value-return, as implemented in the IBM/370 FORTRAN G and H compilers. Real programmers don't need abstract concepts to get their jobs done: they are perfectly happy with a keypunch, a FORTRAN IV compiler, and a beer. Real Programmers do List Processing in FORTRAN. Real Programmers do String Manipulation in FORTRAN. Real Programmers do Accounting (if they do it at all) in FORTRAN. Real Programmers do Artificial Intelligence programs in FORTRAN. If you can't do it in FORTRAN, do it in assembly language. If you can't do it in assembly language, it isn't worth doing. STRUCTURED PROGRAMMING Computer science academicians have gotten into the "structured programming" rut over the past several years. They claim that programs are more easily understood if the programmer uses some special language constructs and techniques. They don't all agree on exactly which constructs, of course, and the examples they use to show their particular point of view invariably fit on a single page of some obscure journal or another -- clearly not enough of an example to convince anyone. When I got out of school, I thought I was the best programmer in the world. I could write an unbeatable tic-tac-toe program, use five different computer languages, and create 1000 line programs that WORKED. (Really!) Then I got out into the Real World. My first task in the Real World was to read and understand a 200,000 line FORTRAN program, then speed it up by a factor of two. 
Any Real Programmer will tell you that all the Structured Coding in the world won't help you solve a problem like that -- it takes actual talent. Some quick observations on Real Programmers and Structured Programming: Real Programmers aren't afraid to use GOTOs. Real Programmers can write five page long DO loops without getting confused. Real Programmers enjoy Arithmetic IF statements because they make the code more interesting. Real Programmers write self-modifying code, especially if it saves them 20 nanoseconds in the middle of a tight loop. Real Programmers don't need comments: the code is obvious. Since FORTRAN doesn't have a structured IF, REPEAT ... UNTIL, or CASE statement, Real Programmers don't have to worry about not using them. Besides, they can be simulated when necessary using assigned GOTOs. Data structures have also gotten a lot of press lately. Abstract Data Types, Structures, Pointers, Lists, and Strings have become popular in certain circles. Wirth (the above-mentioned Quiche Eater) actually wrote an entire book [2] contending that you could write a program based on data structures, instead of the other way around. As all Real Programmers know, the only useful data structure is the array. Strings, lists, structures, sets -- these are all special cases of arrays and can be treated that way just as easily without messing up your programming language with all sorts of complications. The worst thing about fancy data types is that you have to declare them, and Real Programming Languages, as we all know, have implicit typing based on the first letter of the (six character) variable name. OPERATING SYSTEMS What kind of operating system is used by a Real Programmer? CP/M? God forbid -- CP/M, after all, is basically a toy operating system. Even little old ladies and grade school students can understand and use CP/M. 
Unix is a lot more complicated of course -- the typical Unix hacker never can remember what the PRINT command is called this week -- but when it gets right down to it, Unix is a glorified video game. People don't do Serious Work on Unix systems: they send jokes around the world on USENET and write adventure games and research papers. No, your Real Programmer uses OS/370. A good programmer can find and understand the description of the IJK305I error he just got in his JCL manual. A great programmer can write JCL without referring to the manual at all. A truly outstanding programmer can find bugs buried in a 6 megabyte core dump without using a hex calculator. (I have actually seen this done.) OS/370 is a truly remarkable operating system. It's possible to destroy days of work with a single misplaced space, so alertness in the programming staff is encouraged. The best way to approach the system is through a keypunch. Some people claim there is a Time Sharing system that runs on OS/370, but after careful study I have come to the conclusion that they are mistaken. PROGRAMMING TOOLS What kind of tools does a Real Programmer use? In theory, a Real Programmer could run his programs by keying them into the front panel of the computer. Back in the days when computers had front panels, this was actually done occasionally. Your typical Real Programmer knew the entire bootstrap loader by memory in hex, and toggled it in whenever it got destroyed by his program. (Back then, memory was memory -- it didn't go away when the power went off. Today, memory either forgets things when you don't want it to, or remembers things long after they're better forgotten.) Legend has it that Seymour Cray, inventor of the Cray I supercomputer and most of Control Data's computers, actually toggled the first operating system for the CDC7600 in on the front panel from memory when it was first powered on. Seymour, needless to say, is a Real Programmer. 
One of my favorite Real Programmers was a systems programmer for Texas Instruments. One day, he got a long distance call from a user whose system had crashed in the middle of some important work. Jim was able to repair the damage over the phone, getting the user to toggle in disk I/O instructions at the front panel, repairing system tables in hex, reading register contents back over the phone. The moral of this story: while a Real Programmer usually includes a keypunch and lineprinter in his toolkit, he can get along with just a front panel and a telephone in emergencies. In some companies, text editing no longer consists of ten engineers standing in line to use an 029 keypunch. In fact, the building I work in doesn't contain a single keypunch. The Real Programmer in this situation has to do his work with a text editor program. Most systems supply several text editors to select from, and the Real Programmer must be careful to pick one that reflects his personal style. Many people believe that the best text editors in the world were written at Xerox Palo Alto Research Center for use on their Alto and Dorado computers [3]. Unfortunately, no Real Programmer would ever use a computer whose operating system is called SmallTalk, and would certainly not talk to the computer with a mouse. Some of the concepts in these Xerox editors have been incorporated into editors running on more reasonably named operating systems. EMACS and VI are probably the most well known of this class of editors. The problem with these editors is that Real Programmers consider "what you see is what you get" to be just as bad a concept in text editors as it is in women. No, the Real Programmer wants a "you asked for it, you got it" text editor -- complicated, cryptic, powerful, unforgiving, dangerous. TECO, to be precise. It has been observed that a TECO command sequence more closely resembles transmission line noise than readable text [4]. 
One of the more entertaining games to play with TECO is to type your name in as a command line and try to guess what it does. Just about any possible typing error while talking with TECO will probably destroy your program, or even worse -- introduce subtle and mysterious bugs in a once working subroutine. For this reason, Real Programmers are reluctant to actually edit a program that is close to working. They find it much easier to just patch the binary object code directly, using a wonderful program called SUPERZAP (or its equivalent on non-IBM machines). This works so well that many working programs on IBM systems bear no relation to the original FORTRAN code. In many cases, the original source code is no longer available. When it comes time to fix a program like this, no manager would even think of sending anything less than a Real Programmer to do the job -- no Quiche Eating structured programmer would even know where to start. This is called "job security". Some programming tools NOT used by Real Programmers: FORTRAN preprocessors like MORTRAN and RATFOR. The Cuisinarts of programming -- great for making Quiche. See comments above on structured programming. Source language debuggers. Real Programmers can read core dumps. Compilers with array bounds checking. They stifle creativity, destroy most of the interesting uses for EQUIVALENCE, and make it impossible to modify the operating system code with negative subscripts. Worst of all, bounds checking is inefficient. Source code maintainance systems. A Real Programmer keeps his code locked up in a card file, because it implies that its owner cannot leave his important programs unguarded [5]. THE REAL PROGRAMMER AT WORK Where does the typical Real Programmer work? What kind of programs are worthy of the efforts of so talented an individual? You can be sure that no real Programmer would be caught dead writing accounts-receivable programs in COBOL, or sorting mailing lists for People magazine. 
A Real Programmer wants tasks of earth-shaking importance (literally!): Real Programmers work for Los Alamos National Laboratory, writing atomic bomb simulations to run on Cray I supercomputers. Real Programmers work for the National Security Agency, decoding Russian transmissions. It was largely due to the efforts of thousands of Real Programmers working for NASA that our boys got to the moon and back before the cosmonauts. The computers in the Space Shuttle were programmed by Real Programmers. Real Programmers are at work for Boeing designing the operating systems for cruise missiles. Some of the most awesome Real Programmers of all work at the Jet Propulsion Laboratory in California. Many of them know the entire operating system of the Pioneer and Voyager spacecraft by heart. With a combination of large ground-based FORTRAN programs and small spacecraft-based assembly language programs, they can do incredible feats of navigation and improvisation, such as hitting ten-kilometer wide windows at Saturn after six years in space, and repairing or bypassing damaged sensor platforms, radios, and batteries. Allegedly, one Real Programmer managed to tuck a pattern-matching program into a few hundred bytes of unused memory in a Voyager spacecraft that searched for, located, and photographed a new moon of Jupiter. One plan for the upcoming Galileo spacecraft mission is to use a gravity assist trajectory past Mars on the way to Jupiter. This trajectory passes within 80 +/- 3 kilometers of the surface of Mars. Nobody is going to trust a PASCAL program (or PASCAL programmer) for navigation to these tolerances. As you can tell, many of the world's Real Programmers work for the U.S. Government, mainly the Defense Department. This is as it should be. Recently, however, a black cloud has formed on the Real Programmer horizon. 
It seems that some highly placed Quiche Eaters at the Defense Department decided that all Defense programs should be written in some grand unified language called "ADA" (registered trademark, DoD). For a while, it seemed that ADA was destined to become a language that went against all the precepts of Real Programming -- a language with structure, a language with data types, strong typing, and semicolons. In short, a language designed to cripple the creativity of the typical Real Programmer. Fortunately, the language adopted by DoD has enough interesting features to make it approachable: it's incredibly complex, includes methods for messing with the operating system and rearranging memory, and Edsger Dijkstra doesn't like it [6]. (Dijkstra, as I'm sure you know, was the author of "GoTos Considered Harmful" -- a landmark work in programming methodology, applauded by Pascal Programmers and Quiche Eaters alike.) Besides, the determined Real Programmer can write FORTRAN programs in any language. The Real Programmer might compromise his principles and work on something slightly more trivial than the destruction of life as we know it, providing there's enough money in it. There are several Real Programmers building video games at Atari, for example. (But not playing them. A Real Programmer knows how to beat the machine every time: no challenge in that.) Everyone working at LucasFilm is a Real Programmer. (It would be crazy to turn down the money of 50 million Star Wars fans.) The proportion of Real Programmers in Computer Graphics is somewhat lower than the norm, mostly because nobody has found a use for Computer Graphics yet. On the other hand, all Computer Graphics is done in FORTRAN, so there are a fair number of people doing Graphics in order to avoid having to write COBOL programs.

THE REAL PROGRAMMER AT PLAY

Generally, the Real Programmer plays the same way he works -- with computers. 
He is constantly amazed that his employer actually pays him to do what he would be doing for fun anyway, although he is careful not to express this opinion out loud. Occasionally, the Real Programmer does step out of the office for a breath of fresh air and a beer or two. Some tips on recognizing real programmers away from the computer room: At a party, the Real Programmers are the ones in the corner talking about operating system security and how to get around it. At a football game, the Real Programmer is the one comparing the plays against his simulations printed on 11 by 14 fanfold paper. At the beach, the Real Programmer is the one drawing flowcharts in the sand. A Real Programmer goes to a disco to watch the light show. At a funeral, the Real Programmer is the one saying "Poor George. And he almost had the sort routine working before the coronary." In a grocery store, the Real Programmer is the one who insists on running the cans past the laser checkout scanner himself, because he never could trust keypunch operators to get it right the first time. THE REAL PROGRAMMER'S NATURAL HABITAT What sort of environment does the Real Programmer function best in? This is an important question for the managers of Real Programmers. Considering the amount of money it costs to keep one on the staff, it's best to put him (or her) in an environment where he can get his work done. The typical Real Programmer lives in front of a computer terminal. Surrounding this terminal are: Listings of all programs the Real Programmer has ever worked on, piled in roughly chronological order on every flat surface in the office. Some half-dozen or so partly filled cups of cold coffee. Occasionally, there will be cigarette butts floating in the coffee. In some cases, the cups will contain Orange Crush. Unless he is very good, there will be copies of the OS JCL manual and the Principles of Operation open to some particularly interesting pages. 
Taped to the wall is a line-printer Snoopy calendar for the year 1969. Strewn about the floor are several wrappers for peanut butter filled cheese bars (the type that are made stale at the bakery so they can't get any worse while waiting in the vending machine). Hiding in the top left-hand drawer of the desk is a stash of double stuff Oreos for special occasions. Underneath the Oreos is a flow-charting template, left there by the previous occupant of the office. (Real Programmers write programs, not documentation. Leave that to the maintenance people.) The Real Programmer is capable of working 30, 40, even 50 hours at a stretch, under intense pressure. In fact, he prefers it that way. Bad response time doesn't bother the Real Programmer -- it gives him a chance to catch a little sleep between compiles. If there is not enough schedule pressure on the Real Programmer, he tends to make things more challenging by working on some small but interesting part of the problem for the first nine weeks, then finishing the rest in the last week, in two or three 50-hour marathons. This not only impresses his manager, who was despairing of ever getting the project done on time, but creates a convenient excuse for not doing the documentation. In general:

- No Real Programmer works 9 to 5. (Unless it's 9 in the evening to 5 in the morning.)
- Real Programmers don't wear neckties.
- Real Programmers don't wear high heeled shoes.
- Real Programmers arrive at work in time for lunch. [9]
- A Real Programmer might or might not know his wife's name. He does, however, know the entire ASCII (or EBCDIC) code table.
- Real Programmers don't know how to cook. Grocery stores aren't often open at 3 a.m., so they survive on Twinkies and coffee.

THE FUTURE

What of the future? It is a matter of some concern to Real Programmers that the latest generation of computer programmers are not being brought up with the same outlook on life as their elders. Many of them have never seen a computer with a front panel. 
Hardly anyone graduating from school these days can do hex arithmetic without a calculator. College graduates these days are soft -- protected from the realities of programming by source level debuggers, text editors that count parentheses, and user friendly operating systems. Worst of all, some of these alleged computer scientists manage to get degrees without ever learning FORTRAN! Are we destined to become an industry of Unix hackers and Pascal programmers? On the contrary. From my experience, I can only report that the future is bright for Real Programmers everywhere. Neither OS/370 nor FORTRAN show any signs of dying out, despite all the efforts of Pascal programmers the world over. Even more subtle tricks, like adding structured coding constructs to FORTRAN have failed. Oh sure, some computer vendors have come out with FORTRAN 77 compilers, but every one of them has a way of converting itself back into a FORTRAN 66 compiler at the drop of an option card -- to compile DO loops like God meant them to be. Even Unix might not be as bad on Real Programmers as it once was. The latest release of Unix has the potential of an operating system worthy of any Real Programmer. It has two different and subtly incompatible user interfaces, an arcane and complicated terminal driver, virtual memory. If you ignore the fact that it's structured, even C programming can be appreciated by the Real Programmer: after all, there's no type checking, variable names are seven (ten? eight?) characters long, and the added bonus of the Pointer data type is thrown in. It's like having the best parts of FORTRAN and assembly language in one place. (Not to mention some of the more creative uses for #define.) No, the future isn't all that bad. Why, in the past few years, the popular press has even commented on the bright new crop of computer nerds and hackers ([7] and [8]) leaving places like Stanford and M.I.T. for the Real World. 
From all evidence, the spirit of Real Programming lives on in these young men and women. As long as there are ill-defined goals, bizarre bugs, and unrealistic schedules, there will be Real Programmers willing to jump in and Solve The Problem, saving the documentation for later. Long live FORTRAN!

ACKNOWLEDGEMENT

I would like to thank Jan E., Dave S., Rich G., Rich E. for their help in characterizing the Real Programmer, Heather B. for the illustration, Kathy E. for putting up with it, and atd!avsdS:mark for the initial inspiration.

REFERENCES

[1] Feirstein, B., Real Men Don't Eat Quiche, New York, Pocket Books, 1982.
[2] Wirth, N., Algorithms + Datastructures = Programs, Prentice Hall, 1976.
[3] Xerox PARC editors . . .
[4] Finseth, C., Theory and Practice of Text Editors - or - a Cookbook for an EMACS, B.S. Thesis, MIT/LCS/TM-165, Massachusetts Institute of Technology, May 1980.
[5] Weinberg, G., The Psychology of Computer Programming, New York, Van Nostrand Reinhold, 1971, page 110.
[6] Dijkstra, E., On the GREEN Language Submitted to the DoD, SIGPLAN Notices, Volume 3, Number 10, October 1978.
[7] Rose, Frank, Joy of Hacking, Science 82, Volume 3, Number 9, November 1982, pages 58 - 66.
[8] The Hacker Papers, Psychology Today, August 1980.
[9] Datamation, July, 1983, pp. 263-265.
  14. I have a fairly general question for you all. I keep on running into situations where I need to mix C and ASM. Sometimes this works out easily by just using some asm("") instructions in the middle of my function, but sometimes I really feel like I could benefit from writing a function in ASM and calling it from C. I think my best example of this is the implementation of cryptographic functions such as AES or SHA. For these I sometimes see a 2x or even 3x speed improvement over what the compiler produces, and I need to use these from more than one place, so I really need a C function I can call to do these, but I really need to implement it in ASM. Whenever I ask about mixing C and ASM I am told just not to do it, but it still seems to me that there are a lot of situations where this really is the best way to go? I recall a conversation with @holdmybeer where he needed very precise timing and the optimizer would always change the timing depending on how the banks ended up being laid out (adding or removing bank switches), where implementing a function in ASM also seemed to be the solution. So I would like to get some opinions on this, do you guys agree? Any thoughts on this? PS. We recently had a related discussion about calculating parity as another example.
  15. Orunmila

    Melbourne Weather

    Oh wow, that is quite a change, although the scale does make it look worse 😉 I lived in Phoenix for 7 years, we saw 50 degrees at least once a year, something to experience, but in Melbourne the humidity would be the thing to kill you!
  16. That is a nice exercise, especially since trying to do random numbers in asm is harder than people think; PN-sequences are really by far the best way to do that. Our first challenging exercise was building a bootloader, this was on an 8031 of course 🙂 I think universities fail if they do not update their material. I know the fundamentals are still the same, but learning on ancient tools takes away a lot from the value you can get. Experience with the tools is sometimes just as valuable as foundations, that is after all why you have labs and exercises in the first place, is it not?
  17. Orunmila


    The idea is that we all learn from each other. The community is brand new right now, so any contribution will help getting the conversation going, discuss the weather if that floats your boat 🚣‍♀️ 🙂
  18. Yes I love Whitespace! That is a great example. Also LOLCODE is another great one! KTHXBYE
  19. Abandon all sanity, ye who enter here! (The section above is of course Dante's description of the inscription to the gates of Hell) Computers work in essentially the same way, executing instructions, moving data around, etc. Programming languages are mere abstractions allowing us to tell the same computer how to do the same things using different "words and methods". These abstractions provided by languages like C, C++ or even Java, GoLang or LISP were created on the back of many years of research and when we learn to program we seldom spend the time to understand WHY the language works the way that it does, focussing rather on HOW to use it. Some concepts are best explored by imagining what life would be like if the language we were using to program looked very different. For example every embedded programmer I ask has told me that GOTO is bad, but very seldom could someone actually explain why, and it would be even more rare to find someone who was familiar with Dijkstra's paper on the subject. (see this page for that one and many more examples). I have struggled to find a better explanation of what Dijkstra was trying to convey than the "COME FROM" statement in INTERCAL! The following article appeared in Computer Shopper (the British magazine of that name, not the Ziff-Davis title) around September 1992. Intercal -- the Language From Hell Where would we be without effective programming languages? Wait, don't turn the page; that was a rhetorical question. (The answer isn't: word-processing on slide rules.) Every once in a while it's worth thinking about the matter. Most of us (the lucky ones, that is) don't come within spitting distance of programming from one day to the next. Those who do, usually don't give any thought to them: it's time to write that function so do it, in C or Cobol or dBase whatever comes to hand. We don't waste time contemplating the deeper significance of the tools we use to do the job. 
But what would we do without well-designed, efficient programming languages? Go back to slide rules, maybe. A computer, in a very real sense, is its language. The processor speaks in a native tongue, its own machine code: unreadable by humans and next to impossible to program in, but nevertheless essential. It's the existence of this language which makes the computer such a general purpose tool; a hardware platform upon which we layer the abstractions, the virtual machines, of our operating systems. Which are, when you stop to think about it, simply a set of grammatically correct expressions in the computer's internal language. Take the humble 386 PC for example. Feed it one set of instructions and it pretends to be a cute little Windows machine; feed it another set, and something hairy happens -- it turns into a big, bad file server and starts demanding passwords and issuing threats. This flexibility is not an accident. Almost from the start, the feature which distinguished computers from their complex calculating predecessors was the fact that they were programmable -- indeed, universally programmable, capable of tackling any computationally tractable problem. Strip away the programmability of the machine and you lose the universality. Not to mention the usefulness. Programming languages, as any fule kno, originated in the nineteen-fifties as a partial solution to a major difficulty; machine code is difficult to write and maintain and next to impossible to move (or port) from one computer to another of a different type. Assemblers, which allowed the programmers to write machine code using relatively simple mnemonic commands (which were then "assembled" into machine code) were an early innovation, but still failed to address the portability issue. The solution that finally took off was the concept of an abstract "high level" language: and the first to achieve widespread use was Fortran. 
A high level language is an artificial language with a rigidly specified grammar, which can be translated into machine code via a program called a "compiler". Statements in the high level language may correspond to individual statements, or whole series of statements, in the machine language of the computer. Indeed, the only thing that makes compilation practical is the fact that all computer language systems are effectively equivalent; any algorithm which can be expressed in one language can be solved in any other, in principle. So why are there so many languages?

A load of old cobol'ers

There are several answers to the question of language proliferation. Besides the facetious (some of us don't like Cobol) and the obvious (designing languages looks like a Fun Thing to a certain type of warped personality), there's the pragmatic reason. Simply put, some languages are excellent for expressing certain types of problem, but are lousy at dealing with other situations. Take Prolog, for example. Prolog is a brilliant language for resolving formal logic propositions expressible in the first order predicate calculus, but you'd need your head examining if you tried to write an operating system in it. (The Japanese MITI tried to do something similar with their Fifth Generation Project, and when was the last time you heard of them? Right.) Alternatively, take C. C is a wonderful language. It combines the flexibility and speed of raw machine code with the readability of ... er. Yes, you can re-jig a C compiler to run on anything. You can fine-tune it to produce tight, fast machine code that will execute on your toaster. Yes, you can write wonderful device drivers in it. But, again, you'd need your head examining if you set out to write a humongous database application in it. That's what Cobol is for; or SQL. So to the point of this article ... INTERCAL. 
An icky mess INTERCAL is a programming language which is to other languages as elephants are to deep-sea fishing -- it has nothing whatsoever to do with them. Nothing ... except that it's a programming language. And it serves as a wonderful example of the fact that, despite their theoretical, abstract interchangeability, not all languages are born equal. Here's a potted history of the offender: INTERCAL is short for "Computer Language With No Readily Pronouncable Acronym". It was not so much designed as perpetrated at Princeton University, on the morning of May 26th, 1972, by Donald R. Woods and James M. Lyon. They have been trying to live it down ever since. The current implementation, C-INTERCAL, was written by Eric S. Raymond (also the author of The New Hacker's Dictionary), and -- god help us -- runs on anything that supports C (and the C programming tools lex and yacc). INTERCAL is, in its own terms, elegant, simple and concise. It is also flexible and forgiving; if the compiler (called, appropriately enough, ick) encounters something it doesn't understand it simply ignores it and carries on. In order to insert a comment, just type in something that ick thinks is wrong; but be careful not to embed any valid INTERCAL code in your comments, or ick will get all excited and compile it. There are only two variable types: 16-bit integers, and 32-bit integers, denoted by .1 (for the 16-bit variable called "1") or :1 (for the 32-bit variable named "1"); note that .1 is not equivalent to :1 and definitely has nothing to do with 0.1 (unless it happens to be storing a value of 0.1). INTERCAL supports two unique bitwise operators, interleave (or mingle) and select, denoted by the "^" and "~" symbols respectively. You interleave two variables by alternating the bits in the two operands; you select two variables by taking from the first operand whichever bits correspond to 1's in the second operand, then pack these bits to the right in the result. 
There are also numerous unary bitwise operators, and in order to resolve matters of precedence the pedantic may use sparks (') or rabbit-ears (") to group expressions (Don't blame me for the silly names: INTERCAL has a character set which is best described as creative.) It is not surprising that these operators are unique to INTERCAL; the parlous readability of C would not be enhanced by the addition of syntax like: PLEASE DO IGNORE .1 <-".1^C'&:51~"#V1^C!12~;&75SUB"V'V.1~ Like any other language, INTERCAL has flow-of-control statements and input and output statements. To write something or other into a variable, you need the WRITE IN list statement, where list is a string of variables and/or array elements. The format of the input data should be as numbers, the digits of which are spelt out in english in the range ZERO (or OH) to FOUR TWO NINE FOUR NINE SIX SEVEN TWO NINE FIVE. To output some information, you need the READ OUT list statement, where list again consists of variables. Numbers are printed, by default, in the form of "extended" Roman numerals (the syntax of which I will draw a merciful veil over), although the scholarly may make use of the standard library, which contains routines for formatting output in Sanskrit. Like FORTRAN, INTERCAL uses line numbers which are optional and follow in no particular order. Unlike FORTRAN, INTERCAL has no evil, unspeakable GOTO command, and not even an IF statement. However, you would be wrong to ascribe this to INTERCAL being designed for structured programming; it is actually because C-INTERCAL is the only known language that implements the legendary COME FROM ... control statement (originally described by R. L. Clark in "A Linguistic contribution to GOTO-less programming", Comm. ACM 27 (1984), pp. 349-350). 
For many years the GOTO statement -- once the primary means of controlling the sequence of execution of a program -- has been reviled as contributing to unreadable, unpredictable code that is virtually impossible to follow because it jumps about the place like a kangaroo on amphetamines. The COME FROM statement enables INTERCAL to do away with nasty GOTO's, while still preserving for the programmer that sense of warm self-esteem and achievement that comes from successfully writing a really nasty piece of self-modifying code involving computed GOTO's in FORTRAN (Or, for the terminally hip and hacker-ish, involving a triple-indirect pointer to a union of UNIX kernel data structures in C. Capisce?) Basically, the COME FROM statement specifies a line, or lines, which -- when the program executes them -- will jump to the COME FROM: effectively the opposite of GOTO. Because INTERCAL contains no equivalent of the NEXT statement for controlling whether or not some statement is executed, it provides a creative, endearing and unique substitute; abstention. For example, you can abstain from executing a given line of code with the ABSTAIN FROM (label) form of the command. Alternatively, and more uselessly, you can abstain from executing all statements of a specified type; for example, you can say PLEASE ABSTAIN FROM IGNORING + FORGETTING or DO ABSTAIN FROM ABSTAINING The abstention command is revoked by the REINSTATE statement. It is a Bad Idea to ABSTAIN FROM REINSTATING. It is also worth noting that the ABSTAIN syntax is rather confusing; for example, DO ABSTAIN FROM GIVING UP is not accepted, although a valid synonym for this is DON'T GIVE UP. (GIVE UP is the statement that terminates execution of a program. You are encouraged to use this statement at every opportunity. It is your only hope of escape.) The designers of the INTERCAL language believed that source code should be easy to understand or, failing that, friendly; in extremis, good old-fashioned politeness will do. 
Consequently, the syntax of the language looks a bit odd at first. After the line label (if any) there should be a statement identifier; this can be one of DO, PLEASE, or PLEASE DO. Programs which are insufficiently polite to the compiler may be rejected with the error message PROGRAMMER IS INSUFFICIENTLY POLITE; likewise, programs which spend too much time grovelling to the compiler will be terminated with extreme prejudice. The DO, PLEASE or PLEASE DO is then followed by (optionally) one of NOT, N'T, or %n, then a statement: the NOT or N'T's meaning should be self-evident, while the %n is merely the percentage probability that the following statement will be executed. Of course, with only two binary and three unary operators, it is rather difficult for programmers to get to grips with the language. Therefore, in a fit of quite uncharacteristic generosity, the designers have supplied -- and even partially documented -- a library of rather outre subroutines that carry out such esoteric operations as addition, subtraction, logical comparison, and generation of random numbers. The subroutines will be charmingly familiar to those PDP-11 assembly-language hackers among you who are also fluent in FORTRAN and SPITBOL. Why you should program in Intercal INTERCAL, despite being afflicted with these unique features (which, arguably, should remain that way) has survived for twenty years and is indeed thriving. The relatively recent C-INTERCAL dialect (which introduced the COME FROM statement, among other things) has spread it far beyond its original domain; the infamy continues, transmitted like a malign influence across the Internet. It is a matter of record that no computer language has ever succeeded in gaining widespread acceptance on the basis of elegance, comprehensibility or necessity. Look at FORTRAN, Cobol or C; all of them spread despite the fact that better alternatives were available. 
In fact, there are reasons to believe that INTERCAL is the language of the future. Firstly, INTERCAL creates jobs. Yes, it's true. In general, if a particular programming tool is unfriendly, ugly, and absolutely essential, the response of management is to throw more programmers at it instead of replacing it. INTERCAL is so monumentally difficult to use for anything sensible that it is surely destined to be the cause of full employment, massive increases in the DP budget of every company where it is used, and general prosperity for programmers. Secondly, once you have learned INTERCAL you will be able to win friends and influence people. As the authors point out, if you were to state that the simplest way to store a value of 65536 in a 32-bit variable is DO :1 <- #0$#256 any sensible programmer would say that this was absurd. Since this is indeed the simplest way of doing this, they'd be made to look like a fool in front of their boss (who, by Murphy's law, would have turned up at just that moment). This will have a devastating effect on their ego and simultaneously make you look brilliant (until they realise that you cribbed this example from the manual, like me. Deep shame.) Thirdly, INTERCAL helps sell hardware. There's been a lot of fuss recently about big corporations inventing gigantic, bloated windowing and operating systems that require monstrously powerful computers to run on; some cynics suggest that this is because personal computers are now so powerful that they can already do anything any reasonable individual would want them to, and it's necessary to invent warm furry buttons that soak up millions of processor cycles in order to sell new equipment. With INTERCAL, you no longer need Windows or OS/2 to slow your 486 PC to a crawl! A Sieve of Eratosthenes benchmark (that computes all the prime numbers less than 65536), when coded in INTERCAL, clocked over seventeen hours on a SPARCStation-1. 
The same program, in C, took less than 0.5 seconds -- thus proving, quite clearly, that INTERCAL software is so slow that you absolutely must buy a CRAY-3 or equivalent in order to have any hope of getting any work out of it. Consequently, it is quite clear that INTERCAL represents a major alternative to conventional programming languages. Anyone who learns this language is going to go far -- at least as far as the nearest psychiatric institution. Rumour has it that INTERCAL was not entirely unrelated to the collapse of the Soviet Union and the success of the Apollo missions. It is even reported to improve your sex life and restore hair loss! So we warmly advise you to take this utterly unbiased report at face value and go forth and get your friendly neighbourhood software company to switch to INTERCAL, wherever or whatever they may be using right now. You know it makes sense ... References The INTERCAL Resources Page INTERCAL on WikiPedia Intercal on Rosettacode The INTERCAL reference manual
  20. Whenever I start a new project I always start off reaching for a simple while(1) "superloop" architecture https://en.wikibooks.org/wiki/Embedded_Systems/Super_Loop_Architecture . This works well for doing the basics, but more often than not I quickly come up short and go looking to employ a timer to get some kind of scheduling going. MCC makes this pretty easy and convenient to set up. It contains a library called "Foundation Services" which has 2 different timer implementations, TIMEOUT and RTCOUNTER. These two library modules have pretty much the same interface, but they are implemented very differently under the hood. For my little "Operating System" I am going to prefer the RTCOUNTER version, as keeping time accurately is more important to me than latency. The TIMEOUT module is capable of providing low-latency reaction times whenever a timer expires by adjusting the timer overflow point so that an interrupt will occur right when the next timer expires. This allows you to use the ISR to call the action you want to happen directly and immediately from the interrupt context. Nice as that may be in some cases, it always increases the complexity, the code size and the overall cost of the system. In our case RTCOUNTER is more than good enough, so we will stick with that.

RTCOUNTER Operation

First a little bit more about RTCOUNTER. In short, RTCOUNTER keeps track of a list of running timers. Whenever you call the "check" function it will compare the expiry time of the next timer and call the task function for that timer if it has expired. It achieves this by using a single hardware timer which operates in "Free Running" mode. This means the hardware timer will never be "re-loaded" by the code; it will simply overflow back to its starting value naturally, and every time this happens the module will count that another overflow has happened in an overflow counter. The count of the timer is made up of a combination of the actual hardware timer and the overflow counter. By "hiding" or abstracting the real size of the hardware timer like this, the module can easily be switched over to use any of the PIC timers, regardless of whether they count up or down or how many bits they implement in hardware.

Mode                     | 32-bit Timer value
-------------------------|---------------------------------------------------------
General                  | (32-x) bits of g_rtcounterH + x-bit Hardware Timer
Using TMR0 in 8-bit mode | 24 bits of g_rtcounterH + TMR0 (8-bit)
Using TMR1               | 16 bits of g_rtcounterH + TMR1H (8-bit) + TMR1L (8-bit)

RTCOUNTER is compatible with all the PIC timers, and you can switch it to a different timer later without modifying your application code, which is nice. Since all that happens when the timer overflows is updating the counter, the Timer ISR is as short and simple as possible.

```c
// We only increment the overflow counter and clear the flag on every interrupt
void rtcount_isr(void)
{
    g_rtcounterH++;
    PIR4bits.TMR1IF = 0;
}
```

When we run the "check" function it will construct the 32-bit time value by combining the hardware timer and the overflow counter (g_rtcounterH). It will then compare this value to the expiry time of the next timer in the list to expire. By keeping the list of timers sorted by expiry time it saves time during the checking (which happens often) by doing the sorting work during creation of the timer (which happens infrequently).

How to use it

Using it is fairly straightforward:

1. Create a callback function which returns the "re-schedule" time for the timer.
2. Allocate memory for your timer/task and tie it to your callback function.
3. Create the timer (which starts it), specifying how long before it will first expire.
4. Regularly call the check function to check if the next timer has expired, and call its callback if it has. 
In C the whole program may look something like this example:

```c
#include "mcc_generated_files/mcc.h"

int32_t ledFlasher(void* p);
rtcountStruct_t myTimer = {ledFlasher};

void main(void)
{
    SYSTEM_Initialize();
    INTERRUPT_GlobalInterruptEnable();
    INTERRUPT_PeripheralInterruptEnable();

    rtcount_create(&myTimer, 1000); // Create a new timer using the memory at &myTimer

    // This is my main Scheduler or OS loop, all tasks are executed from here from now on
    while (1)
    {
        // Check if the next timer has expired, call its callback from here if it did
        rtcount_callNextCallback();
    }
}

int32_t ledFlasher(void* p)
{
    LATAbits.RA0 ^= 1; // Toggle our pin
    return 1000;       // Restart this timer 1000 ticks later; return 0 to stop
}
```

Example with Multiple Timers/Tasks

Ok, admittedly blinking an LED with a timer is not rocket science and not really impressive, so let's step it up and show how we can use this concept to write an application which is more event-driven than imperative.

NOTE: If you have not seen it yet, I recommend reading Martin Fowler's article on Event Sourcing, and how this design pattern reduces the probability of errors in your system, on his website here.

By splitting our program into tasks (or modules) which each perform a specific action and work independently of other tasks, we can generate code modules which are completely independent and re-usable quite easily. Independent and re-usable (or mobile, as Uncle Bob says) does not only mean that the code is maintainable; it also means that we can test and debug each task by itself, and if we do this well it will make the code much less fragile. Code is "fragile" when you are fixing something in one place and something seemingly unrelated breaks elsewhere ... that will be much less likely to happen.

For my example I am going to construct some typical tasks which need to be done in an embedded system. To accomplish this we will:

  1. Create a task function for each of these by creating a timer for it.
  2. Control/set the amount of CPU time afforded to each task by controlling how often the timer times out.
  3. Communicate between tasks only through a small number of shared variables (this is best done using Message Queues - we will post about those in a later blog some time).

Let's go ahead and construct our system. Here is the big picture view. This system has 6 tasks being managed by the Scheduler/OS for us:

  1. Sampling the ADC to check the battery level. This has to happen every 5 seconds.
  2. Process keys. We are looking at a button which needs to be de-bounced (100 ms).
  3. Process the serial port for any incoming messages. The port is on interrupt, baud is 9600. Our buffer is 16 bytes, so we want to check it every 10 ms to ensure we do not lose data.
  4. Update the system LCD. We only update the LCD when the data has changed; we want to check for a change every 100 ms.
  5. Update LEDs. We want this to happen every 500 ms.
  6. Drive outputs. Based on our secret sauce we will decide when to toggle some pins; we do this every 1 s.

These tasks will work together, or co-operate, by keeping to the promise never to run for a long time (let's agree that 10 ms is a long time; tasks taking longer than that need to be broken into smaller steps). This arrangement is called Co-operative Multitasking. This is a well-known mechanism of multi-tasking on a microcontroller, and has been implemented in systems like Windows 3.1 and Windows 95 as well as Classic Mac OS in the past.

By using the Scheduler and event-driven paradigm here we can implement and test each of these subsystems independently. Even when we have it all put together we can easily replace one of these subsystems with a "Test" version of it and use that to generate test conditions for us to ensure everything will work correctly under typical operating conditions. We can "disable" any part of the system by simply commenting out the "create" function for that timer and it will not run.
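A task that has more than 10 ms of work can keep that promise by slicing the work up and using the callback's return value to schedule the next slice. A minimal sketch, using the same int32_t callback(void*) shape as the LED flasher above; the slice count and tick values here are made-up illustration numbers, not part of the library:

```c
#include <stdint.h>

#define TOTAL_SLICES 4  /* hypothetical: the job fits in 4 short slices */

/* Each invocation does one slice (well under 10 ms of work) and the
 * return value tells the scheduler when to call us again: soon while
 * the job is unfinished, and much later once it is complete. */
int32_t longJobTask(void* p)
{
    static uint8_t slice = 0;

    /* ... do roughly 1/TOTAL_SLICES of the work here ... */

    if (++slice < TOTAL_SLICES) {
        return 10;   /* not done: run the next slice 10 ticks from now */
    }
    slice = 0;
    return 1000;     /* done: start the whole job over in 1000 ticks */
}
```

Because the scheduler only ever runs one callback at a time, the slices never need locking against each other; the static state machine is enough.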
We can also adjust how often things happen, or adjust priorities, by modifying the task time values. As before, we first allocate some memory to store all of our tasks, and initialize each task with a pointer to the callback function used to perform it. The main program ends up looking something like this:

```c
void main(void)
{
    SYSTEM_Initialize();
    INTERRUPT_GlobalInterruptEnable();
    INTERRUPT_PeripheralInterruptEnable();

    rtcount_create(&adcTaskTimer, 5000);
    rtcount_create(&keyTaskTimer, 100);
    rtcount_create(&serialTaskTimer, 10);
    rtcount_create(&lcdTaskTimer, 100);
    rtcount_create(&ledTaskTimer, 500);
    rtcount_create(&outputTaskTimer, 1000);

    // This is my main Scheduler or OS loop, all tasks are executed from events
    while (1)
    {
        // Check if the next timer has expired, call its callback from here if it did
        rtcount_callNextCallback();
    }
}
```

As always, the full project, including the task functions and the timer variable declarations, can be downloaded. The skeleton of this program, which does initialize the other peripherals but runs only the timers, compiles to just 703 words of code, or 8.6% on this device, and it runs all 6 program tasks using a single hardware timer.
  21. The King's Toaster
Anonymous

Once upon a time, in a kingdom not far from here, a king summoned two of his advisors for a test. He showed them both a shiny metal box with two slots in the top, a control knob and a lever. "What do you think this is?"

One advisor, an engineer, answered first. "It is a toaster," he said. The king asked, "How would you design an embedded computer for it?" The engineer replied, "Using a four-bit microcontroller, I would write a simple program that reads the darkness knob and quantizes its position to one of 16 shades of darkness, from snow white to coal black. The program would use that darkness level as the index to a 16-element table of initial timer values. Then it would turn on the heating elements and start the timer with the initial value selected from the table. At the end of the time delay, it would turn off the heat and pop up the toast. Come back next week, and I'll show you a working prototype."

The second advisor, a computer scientist, immediately recognized the danger of such short-sighted thinking. He said, "Toasters don't just turn bread into toast, they are also used to warm frozen waffles. What you see before you is really a breakfast food cooker. As the subjects of your kingdom become more sophisticated, they will demand more capabilities. They will need a breakfast food cooker that can also cook sausage, fry bacon, and make scrambled eggs. A toaster that only makes toast will soon be obsolete. If we don't look to the future, we will have to completely redesign the toaster in just a few years.

With this in mind, we can formulate a more intelligent solution to the problem. First, create a class of breakfast foods. Specialize this class into subclasses: grains, pork and poultry.
The specialization process should be repeated with grains divided into toast, muffins, pancakes and waffles; pork divided into sausage, links and bacon; and poultry divided into scrambled eggs, hard-boiled eggs, poached eggs, fried eggs, and various omelet classes. The ham and cheese omelet class is worth special attention because it must inherit characteristics from the pork, dairy and poultry classes. Thus, we see that the problem cannot be properly solved without multiple inheritance. At run time, the program must create the proper object and send a message to the object that says, 'Cook yourself'. The semantics of this message depend, of course, on the kind of object, so they have a different meaning to a piece of toast than to scrambled eggs.

Reviewing the process so far, we see that the analysis phase has revealed that the primary requirement is to cook any kind of breakfast food. In the design phase, we have discovered some derived requirements. Specifically, we need an object-oriented language with multiple inheritance. Of course, users don't want the eggs to get cold while the bacon is frying, so concurrent processing is required, too.

We must not forget the user interface. The lever that lowers the food lacks versatility and the darkness knob is confusing. Users won't buy the product unless it has a user-friendly, graphical interface. When the breakfast cooker is plugged in, users should see a cowboy boot on the screen. Users click on it and the message 'Booting UNIX v. 8.3' appears on the screen. (UNIX 8.3 should be out by the time the product gets to the market.) Users can pull down a menu and click on the foods they want to cook.

Having made the wise decision of specifying the software first in the design phase, all that remains is to pick an adequate hardware platform for the implementation phase. An Intel 80386 with 8MB of memory, a 30MB hard disk and a VGA monitor should be sufficient.
If you select a multitasking, object oriented language that supports multiple inheritance and has a built-in GUI, writing the program will be a snap. (Imagine the difficulty we would have had if we had foolishly allowed a hardware-first design strategy to lock us into a four-bit microcontroller!)." The king had the computer scientist thrown in the moat, and they all lived happily ever after.
  22. Orunmila


    Another couple of blogs went up today, please check the two areas for the latest editions! If you are enjoying these and find yourself coming back to look for more, please do follow the blog; that way you are subscribed for updates whenever a new edition is posted. You should find a "follow" link you can click at the top right of the Blog page. You can find the embedded programming blog here, and the programming lore blog here.
  23. Version 1.0.0


    This zip file contains the project implementing the full RTCOUNTER example project.
  24. Today’s Lore blog is about Edsger Dijkstra. One of my all-time favorites!