Blogs

Featured Entries

  • N9WXU

    Tokenizing Keywords - Part 1

    By N9WXU

    This is the first of a 5 part article where I will explore some different ways to tokenize keywords.  This is a simple and common task that seems to crop up when you least expect it.  We have all probably done some variation of the brute force approach in this first posting, but the posts that follow should prove interesting.  Here is the sequence: Part 1 : STRCMP Brute Force and the framework Part 2 : IF-ELSE for speed Part 3 : Automating the IF-ELSE for maintenance Part 4 :
    • 0 comments
    • 136 views
  • Orunmila

    How long is a nanosecond?

    By Orunmila

    Exactly how long is a nanosecond? This Lore blog is all about standing on the shoulders of giants. Back in February 1944 IBM shipped the Harvard Mark 1 to Harvard University. It looked like this: The Mark I was a remarkable machine at the time, it could perform addition in 1 cycle (which took roughly 0.3 seconds) and multiplication in 20 cycles or 6 seconds. Calculating sin(x)  would run up to 60 seconds (1 minute). The team that ran this Electromechanical computer had o
    • 0 comments
    • 133 views
  • Orunmila

    Epigrams on Programming

    By Orunmila

    Epigrams on Programming Alan J. Perlis Yale University This text has been published in SIGPLAN Notices Vol. 17, No. 9, September 1982, pages 7 - 13.  The phenomena surrounding computers are diverse and yield a surprisingly rich base for launching metaphors at individual and group activities. Conversely, classical human endeavors provide an inexhaustible source of metaphor for those of us who are in labor within computation. Such relationships between society and device are no
    • 0 comments
    • 118 views
  • Orunmila

    The Ballmer Peak

    By Orunmila

    If you are going to be writing any code you can probably use all the help you can get, and in that line you better be aware of the "Ballmer Peak". Legend has it that drinking alcohol impairs your ability to write code, BUT there is a curious peak somewhere in the vicinity of a 0.14 BAC where programmers attain almost super-human programming skill. The XKCD below tries to explain the finer nuances. But seriously many studies have shown that there is some truth to this in the sense that
    • 0 comments
    • 98 views
  • Orunmila

    Using the CDC Serial port on the PIC18F47K40 Xpress Evaluation Board

    By Orunmila

    I recently got my hands on a brand new PIC18F47K40 Xpress board (I ordered one after we ran into the Errata bug here a couple of weeks ago). I wanted to start off with a simple "Hello World" app which would use the built-in CDC serial port, which is great for doing printf debugging with, and interacting with the board in general since it has no LED's and no display to let me know that anything is working, but immediately I got stuck. Combing the user's guide I could not find any mention of
    • 0 comments
    • 118 views

Our community blogs

  1. N9WXU
    Latest Entry

By N9WXU

It has been said that software is the most complicated system humanity has created, and like all complicated things, we manage this complexity by breaking the problem into small problems.  Each small problem is then solved, and the solutions are assembled into larger and larger solutions.  In other fields of work, humans have created standardized solutions to common problems.  For example, nails and screws are common solutions to the problem of fastening wood together.  Very few carpenters worry about the details of making nails and screws; they simply use them as needed.  This practice of creating common solutions to typical problems is also done in software.  Software libraries can easily be used to provide drivers and advanced functions, saving a developer many hours of effort.

    To make a software library useful, the library developer must create an abstraction of the problem solved by the library.  This abstraction must interact with the library user in a simple way and hide the specialist details of the problem.  For example, if your task is to convert RGB color values into CMYK color values, you would get a library that had a function such as this one:

    struct cmyk {
    	float cyan;
    	float magenta;
    	float yellow;
    	float black;
    };
    
    struct rgb {
    	float red;
    	float green;
    	float blue;
    };
    
    struct cmyk make_CMYK_from_RGB(struct rgb);

This seems very simple, and it would absolutely be simple to use.  But if you had to implement such a function yourself, you would quickly find yourself immersed in color profiles and the behavior of the human eye.  All of that complexity is hidden behind a simple function.
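To make this concrete, here is a minimal sketch of what an implementation might look like using only the simplest textbook RGB-to-CMYK formula, ignoring color profiles, gamma, and perception entirely.  The struct definitions repeat the interface above; the conversion math is the naive one, not what a real color library would do:

```c
struct cmyk {
	float cyan;
	float magenta;
	float yellow;
	float black;
};

struct rgb {
	float red;
	float green;
	float blue;
};

/* Naive textbook conversion; inputs are assumed to be 0.0 .. 1.0.
   A production library would also handle color profiles and gamma. */
struct cmyk make_CMYK_from_RGB(struct rgb in)
{
	struct cmyk out = {0.0f, 0.0f, 0.0f, 1.0f};
	float max = in.red;

	if (in.green > max) max = in.green;
	if (in.blue  > max) max = in.blue;

	if (max > 0.0f) {	/* pure black needs no division */
		out.black   = 1.0f - max;
		out.cyan    = (max - in.red)   / max;
		out.magenta = (max - in.green) / max;
		out.yellow  = (max - in.blue)  / max;
	}
	return out;
}
```

Even in this toy form the caller's view stays one function call; every refinement (profiles, rendering intent, ink limits) would land inside the library, not in the caller.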

In the embedded world we often work with hardware, and we are very used to silicon vendors providing hardware abstraction layers.  These hardware abstraction layers are an attempt to simplify the use of a vendor's hardware, and to make it more complicated to switch to a competing system.  Let us go into a little more detail.

    Here is a typical software layer cake as drawn by a silicon vendor.  Often they will provide the bottom 3 layers and even a few demo applications.  The hope is you will create your application using their libraries.  The benefit for you is a significant time savings (you don't need to make your nails and screws).  The benefit to the silicon vendor is getting you locked into a proprietary solution.

[Image: a typical vendor software layer cake, with the application on top and the vendor-supplied layers below]

Here is a short story about the early "dark ages" of computing, before PCs had reasonable device drivers (hardware abstraction).

In the early days of PC gaming, all PC games ran in MS-DOS.  This was to improve game performance by removing any software that was not specifically required.  The only sound these early PCs had was a simple buzzer, so a large number of companies developed a spectacular number of sound cards.  There were so many sound cards that PC games could not keep up adding sound card support.  Each game had a setup menu where the sound card was selected along with its I/O memory, IRQ, and other esoteric parameters.  We had to write the HW configuration down on a cheat sheet, and each time we upgraded we had to juggle the physical configuration of our PC (with jumpers) so everything ran without conflict.  Eventually, the Sound Blaster card became the "standard" card, and all other vendors either designed their HW to be compatible or wrote their drivers to look just like the Sound Blaster drivers and achieve compatibility in software.

Hardware abstraction has the goal of creating a hardware interface definition that allows the hardware to present the needed capabilities to the running application.  The hardware can have many additional features and capabilities, but these are not important to the application, so they are not part of the interface.  Abstraction thus simplifies by hiding the things the application does not care about and focusing on just the features it does.
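One common way to express such an interface definition in C is a struct of function pointers that each board's support code fills in.  The sketch below is purely illustrative (the names are hypothetical, not from any vendor library); the "hardware" here is a RAM-backed mock standing in for real UART registers:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* A hypothetical serial interface: only the capabilities the
   application needs, nothing about the underlying peripheral. */
struct serial_interface {
    void   (*init)(uint32_t baud_rate);
    void   (*write)(const uint8_t *data, size_t length);
    size_t (*read)(uint8_t *buffer, size_t max_length);
};

/* A mock implementation backed by a RAM buffer, standing in for
   the code a BSP would supply to drive the actual UART. */
static uint8_t mock_fifo[64];
static size_t  mock_count;

static void mock_init(uint32_t baud_rate)
{
    (void)baud_rate;            /* a real port would program the BRG */
    mock_count = 0;
}

static void mock_write(const uint8_t *data, size_t length)
{
    if (length > sizeof mock_fifo - mock_count)
        length = sizeof mock_fifo - mock_count;
    memcpy(mock_fifo + mock_count, data, length);
    mock_count += length;
}

static size_t mock_read(uint8_t *buffer, size_t max_length)
{
    size_t n = (mock_count < max_length) ? mock_count : max_length;
    memcpy(buffer, mock_fifo, n);
    mock_count -= n;
    return n;
}

static const struct serial_interface mock_serial = {
    mock_init, mock_write, mock_read
};
```

The application only ever touches the three function pointers; swapping boards means swapping which struct the BSP provides, not rewriting the application.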

So if the silicon vendors are providing these abstractions, life can only be good!... We should look a little more closely.

Silicon is pretty cheap to make but expensive to design.  So each microcontroller has a large number of features on each peripheral, in the hope that it will find a home in a large number of applications.  Many of these features are mutually exclusive, such as synchronous vs. asynchronous mode in the EUSART on a PIC microcontroller.  These features are all well documented in the data sheets, but at some point it was decided across the entire industry that the features would be easier to use if there were functions tied to each one.  Here is an example from MCC's MSSP driver in SPI mode:

    void SPI2_ClearWriteCollisionStatus(void)
    {
        SSP2CON1bits.WCOL = 0;
    }

Now it may be useful to have a more readable name for the WCOL flag, and perhaps ClearWriteCollisionStatus does make the code easier to read.  The argument is that making this function call is more intuitive than clearing the WCOL bit.  As you go through many of the HAL layers, you find hundreds of examples of very simple functions setting or clearing a few bits.  In a few cases you will find an example where all the functions work together to create a specific abstraction.  In most cases, you simply find the HW flags hidden behind more verbosely named functions.  Simply renaming the bits is NOT a hardware abstraction.  In fact, if the C compiler does not automatically inline these functions, they simply create overhead.
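The overhead point is easy to demonstrate.  A sketch of the same wrapper written so the compiler can remove the call entirely (the register bit here is a plain variable standing in for SSP2CON1bits.WCOL, since the real SFR only exists on the target device):

```c
/* Stand-in for the SSP2CON1bits.WCOL hardware flag; on a real
   PIC this would be a bit in a special function register. */
static volatile unsigned char fake_WCOL;

/* Renaming the bit adds no abstraction, but declaring the wrapper
   'static inline' at least lets the compiler expand it in place
   instead of emitting a call for every single bit operation. */
static inline void SPI2_ClearWriteCollisionStatus(void)
{
    fake_WCOL = 0;
}
```

Whether a given vendor's generated driver is actually declared this way is worth checking before assuming the wrappers are free.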

Sadly, there is another problem in this mess.  The data sheets are very precisely written documents that accurately describe the features of the hardware.  Typically these data sheets are written with references to registers and bits.  If the vendor provides a comprehensive function interface to the hardware, the data sheet will need to be recreated with function calls and usage examples rather than bits and registers.

In my opinion, the term HAL (Hardware Abstraction Layer) has been hijacked to mean a function-call interface to all the peripheral features.  What about the Board Support Package (BSP)?  Generally the BSP is inserted into the layer cake to provide a place for all the code that enables the vendor demo code to run on the "HAL".  Arguably, the BSP is what a purist would call the HAL.

Enough of the ranting... How does this topic affect you, the hapless developer who is likely using vendor code?

Silicon vendors will continue to provide HALs to interface to the hardware, middleware to provide customers with high-level function libraries, and board support packages to link everything to their proprietary demo boards.  As customers, we can evaluate their offerings on their systems, but we should expect to write our own BSPs to get the rest of their software running on our final product hardware.

Software vendors will continue to provide advanced libraries, RTOSes, and other forms of middleware for us to buy to speed our development.  The ability to adapt this software to our systems largely depends upon how well the software vendor defines the interfaces we are expected to provide.  Sometimes these vendors can be contracted to get their software running on our hardware and get us going.

FW engineers will continue to spend a significant part of the project nudging all these pieces into one cohesive system so we can layer our secret sauce on top.

    One parting comment.

    Software layers are invented to allow large systems to take advantage of the single responsibility principle.  This is great, but if you use too many layers you end up with a new problem called Lasagna code.  If you use too few layers you end up with Spaghetti code.  One day I would love to know why Italian food is used to name two of the big software smells.

    Good Luck

     

  2. Orunmila
    Latest Entry

By Orunmila

    Melvin Conway quipped the phrase back in 1967 that "organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations."

Over the decades this old adage has proven to be quite accurate, and it has become known as "Conway's Law". Researchers from MIT and Harvard have since shown that there is strong evidence for this correlation; they called it "The Mirroring Hypothesis".

When you read "The Mythical Man-Month" by Fred Brooks, you see that we already knew back in the seventies that there is no silver bullet when it comes to software engineering, and that the reason for this is essentially the complexity of software and how we deal with it. It turns out that adding more people to a software project increases the number of people each person needs to communicate with and the number of people who need to understand the code. When we just make one big team where everyone has to communicate with everyone, the code tends to reflect this structure. The more people we add to the team, the more quickly the structure starts to resemble something we all know all too well!
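Brooks's intercommunication argument can be made concrete with a little arithmetic: in a team where everyone talks to everyone, n people form n(n-1)/2 communication channels, so the channel count grows quadratically while headcount grows linearly. A tiny sketch:

```c
/* Communication channels in a fully connected team of n people:
   each of the n members pairs with n-1 others, and each pair is
   counted once, giving n * (n - 1) / 2 channels. */
static unsigned channels(unsigned n)
{
    return n * (n - 1) / 2;
}
```

Going from a 2-person team (1 channel) to a 10-person team (45 channels) multiplies the coordination paths far faster than the headcount, which is exactly the structure the code ends up mirroring.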

[Images: team communication graphs that grow into a tangled web as the team gets bigger]

When we follow the age-old technique of divide and conquer, making small Agile teams that each work on a part of the code which is their single responsibility, it turns out that we end up with encapsulation and modularity, with dependencies managed between the modules.

    No wonder the world is embracing agile everywhere nowadays!

You can of course do your own research on this; here are some org charts of some well-known companies out there you can use to check the hypothesis for yourself!

[Image: org charts of some well-known companies]

     
