global return code and log libraries

All my articles in the bugfree_robot project until now were dedicated to detailing Test Driven Development, its setup and basic steps. I’ve described all those steps at a slow pace and in a very detailed manner. At this moment the project base is ready, as are the simpler test features that reflect the majority of the tests in many embedded projects.

The goal from now on is to develop the many libraries of the firmware while defining an architecture that will connect all those low- and high-level libraries. The concepts already shown about setting up test files and adding them to the Makefile won’t be repeated in the articles from now on, but will always be available in the commits, of course. Any important new coding or testing technique used in the code will be presented and described in the article.

With all that out of the way, it is time to move on with the firmware. This step is about coding two new libraries. One of them is actually nothing more than a header file, but it is an important one. They are:

  • return codes library
  • log utility library
Continue reading...

setup build system in Sublime Text Editor

I’m used to coding in Sublime Text 3 while developing the tests and libraries for my projects. It provides many smart shortcuts that can get a lot done faster once you get used to them. I know many people advocate against any kind of visual editor for coding and will stick with vim until death, but I’ve never gotten there. It may be a good tool after the “getting used to it” period, but it never got me to really try building a relationship with it. By the way, for those able to read Portuguese, a friend of mine has shared a guide with many pointers on setting up an efficient vim working environment; check it out here.

My standard working setup is a Sublime window showing the full repository folder tree on the left, two files open side by side, and a terminal window (preferably on another monitor) sitting in the test folder, so a simple make command builds and runs my tests. This works pretty well and causes me only one occasional problem: building the tests after modifying a few files in Sublime but forgetting to save one of them, getting unexpected test results, and taking a few minutes to realize the cause. This problem was fixed as a side effect when I decided to set up a build system inside Sublime to run my tests.
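For reference, a minimal sketch of such a build system (the test-folder path is an assumption about the repository layout):

```json
{
    "shell_cmd": "make",
    "working_dir": "${project_path}/test",
    "file_regex": "^(..[^:]*):([0-9]+):?([0-9]+)?:? (.*)$"
}
```

Sublime saves the current file before building, and the save_all_on_build setting can extend that to all modified files, which is what solves the forgotten-save problem. The file_regex lets Sublime jump straight to the file and line of a compiler error.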

Continue reading...

setting up automatic tests to pinpoint your errors

It is usual to keep building and running the tests on the workstation throughout development. This is a main part of the TDD cycle, as explained here and shown in this article. Discipline is required to never forget running the tests after making changes, although every developer takes a leap of faith on that simple fix that could never go wrong. Those are the risky moments where we change a small thing, go straight to committing and pushing it to the Git repository, and realize later that some test is broken.

This is one of the reasons to set up a test build service in the Git repository that runs the tests on every commit pushed to it. After the tests are built, each commit gets a marker showing whether the tests passed, along with access to the build/run log messages of that test execution.
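The exact configuration depends on the chosen service, but the job itself reduces to “run make in the test folder”. As an illustration only (the folder name is assumed), a minimal GitLab CI job could look like this:

```yaml
# .gitlab-ci.yml - build and run the unit tests on every pushed commit
test:
  image: gcc          # any image providing gcc/g++ and make
  script:
    - cd test
    - make            # a failing test fails the job and marks the commit
```

Other services (Travis CI, GitHub Actions) use different file formats but the same idea: run the test Makefile and report its exit status on the commit.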

Continue reading...

Upgrading Ender 3 controller board

Apart from developing embedded firmware and hardware, I’ve been spending many hours improving and printing with an Ender 3 3D printer bought last year. This low-cost printer is able to deliver awesome results when fed with patience, dedication to dig through YouTube tutorials and forum posts, and some mechanical and electronic improvements. Many failed prints will be part of the process too.

Getting my printer to work as well as it can has been an ongoing project in the months since it arrived from China, starting with its assembly. I’ve made many changes and improvements to it in this process. To name a few:

  • Printed a back cover for the LCD console
  • Added a fan guard to the electronics fan (keeping filament pieces from getting into it)
  • Replaced the original PTFE bowden tube with a genuine Capricorn one
  • Added a few LEDs to light its printing bed
  • Replaced the original print cooling fan and fan duct to allow a better view of the nozzle as it prints and improve air flow
  • Updated the Marlin firmware on the original Creality Melzi controller PCB to get rid of known, already-fixed bugs and to add new features
  • Added manual mesh bed leveling to firmware
Continue reading...

blinking fake LEDs in the computer

The last step ended with the IO library source/header files created and the STM32Cube project created and able to cross-compile for the microcontroller, although executing nothing but some peripheral initializations. The next step is adding some lines to the test Makefile telling it to compile the IO library and where to find the STM32 headers. This will allow developing a test for this library and the HAL GPIO mock library.
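The additions are plain MakefileWorker.mk variables; a sketch with assumed paths (the real ones live in the commit for this step):

```make
# Tell the test build about the IO library, the STM32 headers and the mocks.
SRC_FILES      += ../src/io.c       # IO library under test (path assumed)
INCLUDE_DIRS   += ../src            # its header
INCLUDE_DIRS   += ../stm32/Drivers/STM32F1xx_HAL_Driver/Inc  # HAL headers (family assumed)
MOCKS_SRC_DIRS += mocks             # HAL GPIO mock sources
```

MOCKS_SRC_DIRS is the MakefileWorker.mk variable that marks sources as mocks, so they are compiled into the test runner in place of the real HAL.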

Continue reading...

creating the firmware project

Now that TDD and the hardware have been explained, we can start creating the firmware project and the first library. It couldn’t go any other way than with a library to blink LEDs, the traditional hello world of embedded firmware. It may sound crazy, but no LEDs will be required for these tests, nor any microcontroller.

A mock library will be included to simulate the calls to the HAL (Hardware Abstraction Layer) that usually interacts with the microcontroller peripherals. This is a powerful resource that allows building, executing and testing functions that sit in the lower levels of the firmware architecture and wouldn’t run outside the target device.

The Cpputest repository comes with the CppUMock framework and provides special configurations to indicate which source files are included just as mocks for other source files. A mock’s role is to provide a way to monitor how the code under test interacts with other libraries: capturing which functions got called and which parameters were passed to them, and choosing the return value for each call.

Continue reading...

a robot with a purpose

I’ve already spent so many sequential articles talking about TDD, Cpputest and code that could run in applications anywhere. It is time to take a break from that and show some details about the hardware this firmware is being developed for.

The objective of this firmware is (at first) to run in a line follower robot. This kind of robot is meant to run autonomously on a track, trying to complete it as fast as possible in competitions. The track is usually drawn with white tape over a black rubber mat glued to a flat wooden surface. It is composed of straight lines and some curves, and each competition lists rules about minimum straight lengths, curve radii, line width and crossings.

The tracks usually provide markings at the start and end of the curves and at the start/end of the track. These marks can be used to help the robot divide the track into sections and optimize its behavior to get faster through each section. A common strategy is accelerating to high speed on long straight sections and braking before the next curve to avoid missing it. Encoders are recommended to allow distance measurements and enable the latter strategy.

Continue reading...

a bit more about makefiles

The Makefile created for the project in the last article is simple, but the $(CPPUTEST_HOME)/build/MakefileWorker.mk included at its end provides a lot of useful tools. They can help in checking how the tests are built and executed. These tools are invoked as targets of the make command in the terminal and trigger specific actions detailed in the Makefile, which often won’t build and run the tests but execute something else.

The available options are listed by typing make in the terminal and pressing TAB twice. This triggers the auto-complete in your terminal and lists the options. For me it shows these:

$ make 
all           clean         format        objs/         start         test_runner   
all_no_tests  debug         gcov          realclean     test          vtest         
check_paths   flags         lib/          run           test-deps 

I will describe how and when to use some of them.

Continue reading...

create makefile and first test

The next step in this process is creating a Makefile and the first test source file. The Makefile is something that many developers never took the time to learn about and create, myself included until some time ago. It is a really powerful tool that I even dare to describe as limitless.

While building our test project it will perform these functions:

  • define build flags and tools
  • define source codes and includes
  • compile all source files in the project
  • compile all the test sources
  • link all the files following their includes and dependencies
  • link the Cpputest binary and libraries to the built test files
  • create the final test application
  • execute the test application
  • print the test result
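All of that fits in surprisingly few lines once MakefileWorker.mk is included. A minimal sketch, assuming Cpputest lives in a cpputest folder inside the repository and sources in ../src:

```make
# Minimal Cpputest test Makefile; paths are illustrative.
CPPUTEST_HOME = cpputest

COMPONENT_NAME = bugfree_robot      # names the generated test runner

SRC_DIRS      = ../src              # production sources under test
TEST_SRC_DIRS = .                   # test sources (AllTests.cpp, Test*.cpp)
INCLUDE_DIRS  = ../src $(CPPUTEST_HOME)/include

# MakefileWorker.mk implements all the build/link/run/report steps above.
include $(CPPUTEST_HOME)/build/MakefileWorker.mk
```

The long list of functions above is entirely handled by MakefileWorker.mk; our Makefile only declares where things are.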
Continue reading...

setup Cpputest framework

The last post showed an example of coding and running tests with Cpputest, without revealing the ugly part of putting it to work. It is now time to reveal those secrets by explaining how to set up a project with Cpputest, then code and run the tests. Open your terminal and get ready to run commands on it!

The way I’m used to building projects with TDD is including Cpputest inside the project directory; it helps with setting up the build environment and replicating it consistently across the development team. I recommend it even for one-man projects, so it works the same way in both situations.
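A sketch of that one-time setup, following Cpputest’s own README build steps (cloning inside the repository is my convention here, not a requirement):

```shell
# Fetch Cpputest into the project and build its libraries once.
git clone https://github.com/cpputest/cpputest.git
cd cpputest
autoreconf . -i      # needed when building from a git checkout
./configure
make                 # produces libCppUTest.a and libCppUTestExt.a
```

After this, the project’s test Makefile only needs CPPUTEST_HOME pointing at this folder.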

I strongly recommend creating a repository on GitHub or GitLab to host your project. It will allow a future step of running a continuous integration build server that checks every pushed commit for test failures. More on that in a future post (EDIT from the future: check it here after reading this one).

Continue reading...