
Sunday, December 19, 2010

Planning for the First University-Level Teaching Position


“She made herself a wall and told the kids to overcome it. Students who do not hit that wall are inferior and weak to failure. Nowadays the kids do not take instructors seriously anymore because the instructors do not try to build that wall. […] They will be quick to forget me but they will never forget Akutsu-Sensei.”
(Namiki-Sensei about Akutsu-Sensei, じょおうのきょうしつ - The Queen's Classroom EP11)

Fortune smiled on me this term. A key professor took parental leave from the ECE department, and they were looking for a new instructor. This course is relatively tough to teach. I suspect that because none of the faculty were willing to jump in and previous sessional instructors were also unavailable, they actually considered getting in some foreign aid. Having assisted with this course before, I was taken into consideration for the interview process.

The interview process was straightforward: a series of interviews focusing on the overall capabilities of an instructor. First they want to see you in person and assess that you are not a moron. Self-presentation, listening, and observation skills get you through that. The next thing they focus on is teaching quality assurance. In my case the interview was targeted at a specific course. You should know what your course will be about and have a detailed plan for how you would organize the lecture. That includes not only the content of the lecture but also managerial considerations, such as where to allocate TAs and how many TAs you get / are expected to work with. Finally, someone will be invited to assess your background knowledge about specific topics of the course. This one could well be your biggest threat. You should know your stuff well. Some interviewers might drop you like a hot potato: “How can this guy dare to teach …, if he doesn’t even know …”. Other interviewers might be more lenient and still think you are smart enough to prepare well for your assignment.
If you pass this stage they will ask you to perform a demo lecture about a sub-section of the course. Mine went well and I was notified that I got accepted.

So now you have the job, and your schedule for the next term will be shot. I expect to have little time for anything other than teaching this term. Trust me, you should not do this for the money, as you will end up putting far more hours into it than “expected”. If you are really short on cash, consider Teaching Assistantships. If you screw up as an instructor, you not only have an angry mob of undergrads after you, but you also risk your reputation in the department. Since academic communities are usually small, the next time you apply for a faculty or instructor position that will be considered. An angry mob of undergraduates cannot really hurt you as long as you obey university policies and master some martial arts skills ;), but they can be really annoying. To have your peace of mind and be able to look back on an effective term, those issues should, however, be avoided in the first place.

The first thing I worried about was what resources and additional manpower I would be getting. The next thing is to get in touch with whoever was instructing or supporting the course before. The more existing material you can get, the less you have to prepare next term. In my case I was very lucky to connect with the previous instructor and lab instructor of the course. Most of the materials were obtained: slides, past exams, quizzes, and lab materials.

Engineering the Wall

•••••••••••••••▼The Wall▼•••••••••••••••
┬┴┬┴┬┴┬┴┬┴┬┴┬┴┬┴┬┴┬┴┬
┴┬┴┬┴┬┴┬┴┬┴┬┴┬┴┬┴┬┴┬┴
┬┴┬┴┬┴┬┴┬┴┬┴┬┴┬┴┬┴┬┴┬
┴┬┴┬┴┬┴┬┴┬┴┬┴┬┴┬┴┬┴┬┴
(by some unknown very geeky ASCII artist)
Since the students have to metaphorically overcome your wall, you need to engineer it in the first place. The design depends on your resources (strength), the students’ experiences (height), their cultural background (slope & approach), and your objectives.
Most of the findings in this section are a team effort and arose from discussions with, and a high level of commitment from, the lab instructor, previous course instructors, and TAs.

Thoughts on Wall Strength
I was in the lucky position of getting three excellent TAs and an experienced lab instructor. We have three lab sessions of three hours each every other week, plus weekly tutorials. With the lab you can give the students a deep dive into operating systems and have them acquire practical development skills.

Thoughts on Wall Height
Traditionally, and in our case, we let them develop a simple embedded operating system from scratch. Given that they have less than four months to complete it, the assignment is (expectedly) challenging. They are software engineers and come with some background in embedded assembly programming and C/C++ programming. Since they are software engineers, I also have no reservations about exposing them to the full breadth and depth of the operating systems subject.

Learning operating systems is not a mentally challenging task. The algorithms are relatively simple (scheduling, I/O interaction, concurrency) and easy to grasp. Most of the technical material is closely tied to advances in the hardware community; it amounts to a lot to learn, but it is not really mentally challenging.

What is, however, quite challenging is putting the learned material to use in the lab. Therefore, the course was traditionally structured to make the students focus a large part of their effort on the project. Given the short lab timeframe, only the basics are covered in the lab.

Passing the lab alone will not let the students pass the final exam, and just studying for the final exam will hardly make them pass the course. Lately the department has enforced a minimum 50% weight on the final exam. If it were up to me, I would put much more weight on the project, because those skills will certainly pay most of the dividends in their later careers. In the past, instructors tried to combine these two areas by having students evaluate code samples in quizzes/exams as well, which generally dragged the exam average down.

Thoughts on Wall Slope and Approach
As I mentioned before, designing this part is the most challenging and depends on your objectives, the environment, and also the cultural background of your students.

Let’s start with the cultural background (or better, university culture). Coming from a traditional German university for my undergraduate and graduate studies (well, I fought at the European Council to have my Diploma recognized as an MSc… long story), you come from a very different end of university culture than you see in North America. Nowadays the system is changing in Germany, but when I was doing my degree, the five-year program was divided into two parts: a section of four basic semesters (i.e., there were only two terms per year) and a section of four advanced semesters. In my time the tuition was negligible, I paid something like 70 EUR per term, and the study was largely government-financed. As a result the student had no economic stake in pursuing the study, and “the system” had a great interest in letting unfit students fail early. Therefore the students would usually be sent through hell and high water during the first four terms to weed out anyone who was not fit or did not commit to their studies. Operating systems would potentially fall into this primary study. After these four terms all course marks would be merged to establish a mid-examination mark. During my years it was not unusual to barely pass this with about 4 out of 5, where 1 is best (that would amount to about 55% - 60%, on a grading scheme that is usually tougher than Canada’s). After that your program grades are reset and you pursue advanced studies that usually comprise several electives. During the advanced studies, exams are usually carried out orally, which has its own flavour but is, from my perspective, easier. To give an idea of the practical implications: my course of studies started with about 50 students. After the mid-examination we were about 12, who barely made it. As of today, I know of about 6 who eventually graduated.
Many of them now work as research engineers in German automotive companies, pursued PhDs in Germany (actually all of those guys have finished already… time to hurry up for me), or entered the middle or higher management of software companies like SAP.

The courses at my school usually consisted of a lecture and occasional labs or tutorials. Courses sometimes spanned multiple terms before you had an exam and were back-loaded. That means there was only one single final exam, which you were only allowed to write after you had finished all terms of the course. In my case, I had to wait one year to be able to write my math exam. Exams usually spanned multiple hours; this math exam, for instance, took five hours. Labs, tutorials, and assignments were usually not marked; there were a few exceptions for practical courses like Embedded Systems, which usually occurred at the advanced level. All that counted was the final exam. Procrastination, or trying to learn everything two weeks before the exam, was in almost all cases fatal. Most lecturers also followed the traditional Latin meaning of “study” = “studere”, in German “nach etwas streben, sich um etwas bemühen”: to pursue something yourself, to make an effort yourself. Therefore, it was not unusual to ask for problems in the exam that were not covered in detail in the lecture or tutorial. Lecture and tutorial were used to “illustrate” the material, not necessarily to cover it in enough depth for the exam. The student was expected to read the course text and optional materials and develop the necessary skills to survive the exam. Only then would they be valued as an engineer; otherwise they would be weeded out and thrown out of the program early. This property actually provided you with several freedoms that are hard to recognize at first. Only a few instructors cared whether you attended the lecture, tutorials, or labs, as long as you survived the exam. This could actually buy you plenty of free time if you were well organized and skilled. Unfortunately, thanks to the EU, austerity, and the introduction of student fees, this originally tough system has lost much of its traditional flavour over the past 5 years.

Then you come to North America, specifically Waterloo, and see a very different picture for undergraduate studies.

First, students here are actually heavily financially invested in their studies, and the university runs as a business. Therefore, letting them fail early seems to be a big taboo here. The university does not seem to want to disgruntle students. University evaluations are also carried out in a very different manner. Where in Germany you had a few very bright people graduating from your program, in Canada you have many people graduating, with a few being very bright and starting off businesses. The few very bright individuals in both cases dominate the news and are the selling point of your university. On the other hand, you still have a large mass of students in Canada who do not end up becoming CEOs or top managers. Out of those, it is quite surprising how many people in Canada actually end up doing things that are totally unrelated to their initial studies. It surprises me insofar as they not only spent a lot of time on their studies, but they also blew a lot of cash on them. Actually, most of this money comes from student loans, enslaving them to debt for years (another “business model” of the Canadian government).

Second, there is something that I would call North-American culture, which has also been analyzed in the (controversial) research community.
These are ideas that apply to the emergent mass and not necessarily to individuals; however, they may become emergent for an entire course.
Canadians seem to have a very high level of individualism (IDV), a low level of uncertainty avoidance (UAI), a low long-term orientation (LTO), and a low power distance (PDI) compared to other regions.
As a result of LTO and UAI, studies tend to be more applied here and focus on the short-term “market” instead of fundamental research. Hard-core corporate research labs also tend to be found elsewhere than in Canada. Thus, the study is much more applied here and incorporates more labs than in Germany. Therefore, the students also prefer a higher level of practical guidance, instead of being left to themselves and punished hard if they get it wrong. They also seem to work more ad hoc here.
That may make them very prone to procrastination (i.e., ignoring the course/lab). They will eventually focus their attention wherever the pressure/fun comes from (other courses, evening activities…). Other areas may not have this problem: an instructor of mine in Germany taught university-level math courses in Japan 20 years ago… he had trouble setting the standards high enough not to bore the students. Unfortunately, their system has also lost much of its flavour over the past 20 years.

Talking to people about this issue, there are a couple of options to mitigate procrastination and promote continuous participation...

One option is a front-loaded course where most of the effort is spent on a mid-term examination. You communicate to them that they should focus their attention on the mid-term, that they will get burned there regardless (say, scores in the high 60s), and that the final will be easier (say, scores expected in the high 80s). On the lab side, you also force them to focus their efforts on getting the project design right very early. This was done in the past by having them deliver a comprehensive software design document about half-way through the project. This approach seems to be the favoured option for many instructors here; well, some are more lenient on the mid-terms and still burn the students on the finals. However, for the actual lesson this does not provide optimal value. At the time the students write the software design document, they have little idea what they are going to implement. It was usually communicated to them during the marking how good or bad their design was and how the real thing should look. Another issue is that, because many instructors follow this model in other courses as well, the students experience load surges during the term. Here is a good one: try to get hold of a UW undergrad during the mid-term period ;).

Another option is an even workload. Instead of making them write a mid-term, you let them do multiple short quizzes. That way you enforce continuous participation and give them a bit more time during mid-terms. Because the main take-home of the lab is acquiring development skills, we skip the SDD and focus on code deliverables right from the start, spread out over multiple milestones. Finally, they will include the written design and the lessons learned in a final report. This poses more workload on the TAs and lab instructors in terms of marking, but it almost entirely avoids procrastination on the students’ side and, over the long run, provides them with more insight into the course material. The risk of course failures is thereby reduced, because they are implicitly prepared for the final exam and the final course deliverable if they follow through. If they do not, they will feel the pain of low marks right from the start and need to react and presumably catch up with the workload.

We (my lab instructor and I) favour and will use the even-workload option, even though this means that I need to share some of the TA workload myself. I want to provide these guys with a high-quality course and ensure that they actually acquire skills in the lab and remember some of the course material.

Summary
Teaching a course is not going to be a walk in the park. It takes plenty of effort and a good team of TAs and lab instructors. If you teach a course, consider the crowd, their background, their skills, and the established university culture. Be sure to work for the students and not against them. If you are from a different background, noticing such deviations is sometimes hard.
Also thanks to the previous instructors and the current lab instructor for in-depth discussions and input on the subject.


We will see how well that plays out in my case. As Moltke once said: “No plan survives first contact with the enemy!” … we will see if they tear my wall down ;)

References
  • Willmanns & Hehl, “Praxis und Paradoxa des Innovationsmanagements” (In German).
This is a good book on innovation and research management in Germany. After reading it, you notice the imprint that your educational and cultural background makes on you and how to deal with it.
  • Hofstede, Cultural Dimensions.
He tries to analyze different cultures quantitatively. The measures should not be stereotyped onto individuals, but they give a rough idea of what to expect when you travel to a different background. His research is, however, quite controversial.
じょおうのきょうしつ - The Queen's Classroom.
This is a TV series that I came across on a trip to Japan. The teacher has a great interest in preparing the kids for the tough and terribly competitive world in Japan, where bullying and forced subordination are a constant. The methods applied are very traditional and hardly represent what’s actually happening today. It also exposes how traditional values are now ridiculed over there. Such draconian methods would obviously fail in adult education, specifically in Canada. It is, however, a very curious TV series.
  • Pink Floyd – The Wall.
An album from the late ’70s. It deals with abuse by teachers and the situation of schools in Britain at the time. The wall in this case is, however, used as a metaphor for isolation, not effort (as in the citation before).
  • Various management adult education books.

Monday, February 8, 2010

On Designing Boot Loaders and Grey-box-Testing Firmware (Part 2/2)

In the past tutorial, we established how to integrate two pieces of code, exemplified by a boot-loader and firmware interaction. The difference from the previous scenario is that in a test suite you actually need to maintain a symmetric interaction between those two different pieces of code. There are several approaches by which this can be achieved.

  • Explicitly pin each data structure to a specific location in memory, so each piece of code knows where to look for the other’s code and data.
  • Pin an entry point of the firmware to a specific memory section that registers tests in another shared memory section.
  • Pin an entry point of the firmware to a specific memory section that registers tests in a data-structure provided by the test-suite.

Looking at the different options, it becomes apparent why we talk about “grey-box testing”: in all cases we need to know some memory sections within the other piece of code. The approaches differ in the number of memory sections required to be pinned. In addition, you might want to load tests dynamically through the boot loader, requiring additional pinned sections.

Step One: How to Customize the Locations of Code and Data

In principle you want to place individual data structures and code into labelled memory sections, defining the location and label in the linker script and referencing the label in GCC for the linker. The attribute-section paradigm allows us to do so. Suppose we want to place a function into a block of memory at an absolute address. First, we need to define this memory section in the linker script as follows:

MEMORY
{
ram : ORIGIN = 0x10200000, LENGTH = 1M
}

SECTIONS
{
.text :
{
*(.text)
*(.rodata*)
} > ram

.data :
{
*(.data)
} > ram

.bss :
{
*(.bss)
} > ram

__TESTS__ 0x10300000:
{
*(__TESTS__)
}

__TEST_INIT__ 0x10400000:
{
*(__TEST_INIT__)
}
}

Second, we need to reference this section in the code. This is done by declaring an attribute in the function’s specification as follows:

void __attribute__ ((section ("__TEST_INIT__"))) init_tests() {
...
}

The listing shows that the function is indeed stored at that particular location.

Sections:
Idx Name Size VMA LMA File off Algn
...
3 __TEST_INIT__ 00000030 10400000 10400000 00006000 2**1
...
Disassembly of section __TEST_INIT__:

10400000 <init_tests>:
...

Likewise global data structures and variables can be stored at such particular locations.



typedef struct {
void (*test1_fun)();
} TestFixture;

TestFixture __attribute__ ((section("__TESTS__"))) tests;

Step Two: Designing the Tests

We proposed three options for the test integration in the introduction.

Option One: (Naïve) Tell everything

The first, naïve approach is to actually pin each testable primitive and global data structure to a particular memory region. In this scenario, all entry points to these primitives and global data structures are declared in the linker script of the test code. The linker script of the firmware declares all of those sections, and the primitives are pinned to these sections using the attribute-section paradigm. This approach vastly reduces the overhead of implementing tests, since all locations are defined and no further registration of the firmware with the tests is needed. Expected results can be checked directly against the data structures. However, keeping the linker scripts in sync, handling fragmentation of the firmware code (i.e., huge gaps between the declared sections), and changing the firmware code incur significant overhead.

Option Two: Let the Firmware Register with a Global Data Structure

This approach is geared to minimizing the overhead of maintaining the memory locations. In this scenario the firmware voluntarily registers with the test code, placing the information into a shared data structure. Two locations need to be shared across the firmware and the test code.

  • The location of the registration routine that is to be implemented by the firmware code.
  • The location of the global data structure that contains the test information.

In addition, the specification of the test data structure as well as the registration interface need to be shared between the two pieces of code. This can be achieved by sharing a common header file.

This scenario is useful when the test information is known beforehand. It also enables testing slightly modified versions of the firmware, because the test code does not need to be aware of the location of the primitives or data structures; the firmware voluntarily provides this information through the registration routine. Both linker scripts define the sections of the global data structure and the registration routine. In order to avoid adverse effects, the test code may prevent explicit writes to the registration routine. Here is the test linker script:

MEMORY
{
sram : ORIGIN = 0x10200000, LENGTH = 1M
}

__FIRMWARE_ = 0x10100000;

/* firmware's test registration routine */
__REGISTER_TEST__ = 0x10300000;

SECTIONS
{
...
/* shared test data goes here */
__TEST_DATA__ 0x10400000:
{
*(__TEST_DATA__)
}
}

And the firmware linker script looks like this:

MEMORY
{
sram : ORIGIN = 0x10100000, LENGTH = 1M
}

SECTIONS
{
...
/* firmware's test registration routine */
__REGISTER_TEST__ 0x10300000:
{
*(__REGISTER_TEST__)
}

/* shared test data goes here */
__TEST_DATA__ 0x10400000:
{
*(__TEST_DATA__)
}
}

The shared header among the firmware and the test code defining the structure of the shared data structure and the registration interface:

typedef struct {
void (*firmware_fun)();
} TestFixture;

TestFixture __attribute__ ((section("__TEST_DATA__"))) gTestFixture;

extern void __REGISTER_TEST__();

The test code inside the bootloader simply registers with the firmware and invokes the required primitives of the firmware:

int main(void)
{
debug_puts("Inside boot-loader test suite!\r\n");

/* register test structure */
__REGISTER_TEST__();
/* execute a firmware function to test */
gTestFixture.firmware_fun();

debug_puts("Inside boot-loader again!\r\n");

return 0;
}

The location of the registration code inside the firmware is pinned as follows:

void  __attribute__ ((section ("__REGISTER_TEST__"))) test_register() {
gTestFixture.firmware_fun = myprimitive;
}

Merging the test suite with the firmware and executing it in the Coldfire simulator, as described in Part 1 of this tutorial, yields the following output. The test code can be obtained from option2.zip (see attachments below).

Use CTRL-C (SIGINT) to cause autovector interrupt 7 (return to monitor)
Loading memory modules...
Loading board configuration...
Opened [/usr/local/coldfire/share/coldfire/cjdesign-5307.board]
Board ID: CJDesign
CPU: 5307 (Motorola Coldfire 5307)
unimplemented instructions: CPUSHL PULSE WDDATA WDEBUG
69 instructions registered
building instruction cache... done.
Memory segments: dram timer0 timer1 uart0(on port 5206)
uart1(on port 5207) sim flash sram

!!! Remember to telnet to the above ports if you want to see any output!
Hard Reset...
Initializing monitor...
Enter 'help' for help.
dBug> dl merged.s19
Downloading S-Record...
Done downloading S-Record.
dBug> go 0x10200000
... telnet on uart0
Inside boot-loader test suite!
Inside firmware primitive!
Inside boot-loader again!

Option Three: Only Register with the Firmware

This option obviates the use of a shared data structure and may be used when the test code has access to enough memory to allocate its own data structures. Usually the memory constraints on boot loaders and such testers are relaxed enough that this is not a problem. In this case we only have to share part of the test data structure and the specification of the registration interface. The only difference from the previous example is that the registration routine now takes a pointer to the test structure provided by the test code. In addition, only the prefix of the structure needs to be identical across the two pieces of code. For example, the test code may choose to store test results in the structure that are hidden from the firmware. Let’s look at an example. The following is the specification of the test structure for the bootloader. Notice the removal of the pinned global variable and the additional value.

typedef struct {
void (*firmware_fun)();
int someTestingValue;
} TestFixture;

extern void __REGISTER_TEST__(TestFixture *tests);

And here the specification of the test structure for the firmware:

typedef struct {
void (*firmware_fun)();
} TestFixture;

This time the test definition structure is allocated by the bootloader and passed as a parameter to the firmware. The example code is included in option3.zip (see attachments below). The interaction with the simulator is identical to the previous example.

Step Three: Dynamic Tests

In many cases the space for the test code is limited, so flashing an entire precompiled suite of tests may be impossible or undesirable. To overcome this, you may want to consider dynamic tests. In this scenario only individual tests are uploaded through the boot loader and executed against the firmware. This approach can be combined with all of the above methods. In addition to the memory sections required by the test procedure (see Step Two), you also need to define a section that holds the dynamic code. In the boot loader this section is referenced as an array, to store and replace the code, and as a function pointer, to execute the test. The following example shows this with an already written array that is stored at the location of the test code:



/* dynamically loaded structure */
unsigned char __attribute__ ((section("__TEST_CODE__"))) code[] = {
...
};
...
extern int __TEST_CODE__();
extern unsigned char * code;

...

int main(void)
{
debug_puts("Inside boot-loader test suite!\r\n");

/* perform test from loaded array */
__TEST_CODE__();

debug_puts("Inside boot-loader again!\r\n");

return 0;
}

To build the test code, you either link it using a script that places the text, data, and bss segments at the location of the test code, or you pin the test function explicitly to the test-code section of the boot loader. The former option is useful when the tests consist of several subroutine calls. The latter is useful for unit tests consisting of a single function call with no global data structures. In addition, you might want to consider allocating a separate stack inside the test routine to avoid corruption.
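The former option amounts to a small linker script of its own. This is a sketch only: the region name and origin address are placeholders and must match wherever the boot loader's __TEST_CODE__ section actually lives.

```
/* test.ld (sketch): place the whole test at the boot loader's
 * test-code slot; 0x10500000 is a placeholder address */
MEMORY
{
testram : ORIGIN = 0x10500000, LENGTH = 64K
}

SECTIONS
{
.text :
{
*(.text)
*(.rodata*)
} > testram

.data :
{
*(.data)
} > testram

.bss :
{
*(.bss)
} > testram
}
```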

The array containing the test cases can be created from the SREC/S19 file of the compiled test case. The Python script srec_to_c.py included in the sample code performs that conversion for continuous S19 files.

Discussion

In this tutorial, we have shown how to leverage the boot-loader/firmware paradigm introduced in the previous part to perform dynamic firmware testing. It is up to the software engineer to select the degree to which the firmware has to interact with the tests to register with the test suite.

In addition to the procedures shown, you may want to consider using your host system’s timers to check the progress of executed tests. If the firmware does not register with the tests properly, or a test stalls, the boot loader can recover using a timer interrupt.

A substantial risk of these approaches is that the firmware and the bootloader still share the same address space. You may want to consider introducing explicit checks to ensure that the firmware does not touch boot-loader code (e.g., through heap operations) and vice versa.

The srec_to_c.py script performs the transformation of the test case’s SREC/S19 files into the array. You can modify this script to create binary images that are uploaded through your device’s interface.

Sample Code:

Saturday, February 6, 2010

On Designing Boot Loaders and Grey-box-Testing Firmware (Part 1/2)

I am currently TAing SE350. The students’ deliverable is a small real-time executive kernel (RTX) that runs on a Freescale Coldfire chip. We got the idea of building an automated embedded test suite for the students’ term projects. However, instead of having to compile the students’ code from scratch, we only want to take their firmware binary directly and test it. This testing involves injecting several test processes into the students’ OS. These tests stress their implementation and dump the results to a serial port of the actual Coldfire board. Having worked in the embedded field, I find this problem similar to integrating boot loaders with firmware. In this project, the boot loader is the testing code and the actual firmware is the code to be tested.
You end up with two pieces of binary code that will be programmed into your device, so the challenge is to make them talk to each other. In the case of the boot loader, the boot loader invokes the firmware; in the case of the testing code, the testing code invokes the RTX.
In the following sections, I describe the steps for…
  • Building a tool-chain,
  • Developing the boot-loader-, firmware-code,
  • And integrating the different SREC/S19 files
In the second part I will describe how to leverage the established framework to design a native testing suite.
Step 1: What tools do I need?
In order to run stuff on bare chips (i.e., with no existing OS), you need a tool chain that translates your source code into ELF files (ELF = Executable and Linkable Format) and SREC/S19 files for flashing onto the device. We need: a GNU cross-compiler for the target (m68k-elf-gcc), the accompanying binutils (m68k-elf-ld, m68k-elf-objdump, m68k-elf-objcopy), and a monitor or simulator for the target board.
Step 2: Where to put my firmware code?
If you are going to integrate two pieces of code, you need to make sure they do not overlap in flash and do not access each other in an undesired fashion. Since you are developing on bare hardware, you actually have complete control over the former property and can enforce the latter by careful code design. Your generated ELF file will consist of three major sections: the text segment (code and read-only data), the data segment (initialized global data), and the BSS segment (zero-initialized global data).
Note that the BSS segment exists for historic reasons, and in almost all OS lectures it is implied by the data segment (i.e., data := data + BSS). By convention the text segment starts at a lower address than the data and BSS segments. When your program is executed, the values of the data and BSS segments are copied into main memory; their sizes are established at link time, not at runtime. The program’s stack for function calls and local variables, as well as the heap, is by convention allocated after the BSS segment and grows dynamically. The GNU tool-chain you just built includes the GNU linker, which allows specifying these locations explicitly via linker scripts (i.e., LD files). GNU LD files have a simple structure describing:
  • The memory banks and locations,
  • How to spread your code across those locations.
The following simple example file describes an embedded system (i.e., in my case: CJDesign’s MCF5307 board). Most evaluation boards, like mine, come with a huge SRAM and a ROM monitor that allows you to load stuff into main memory. As such, we will dump all code into SRAM for testing purposes. The following example assigns 1 MB at address 0x10100000 to SRAM and dumps all sections of the code into that segment. Hint: the space after the section names is required to ensure the uniqueness of the names. The actual code will execute from the SRAM start address, which is 0x10100000.

/* firmware.ld */
MEMORY
{
  sram        : ORIGIN = 0x10100000, LENGTH = 1M
}

SECTIONS
{
  .text :
  {
    *(.text)
    *(.rodata*)
  } > sram

  .data :
  {
    *(.data)
  } > sram

  .bss :
  {
    *(.bss)
  } > sram
}
A note for SE350 students: Guys please do not attempt to hack the linker file provided by the course. You may run into serious trouble by using my linker script or hacking the existing one!
Step 3: Building your firmware
To compile and link your source (firmware.c) with this file, use the following command.
m68k-elf-gcc -Tfirmware.ld -Wl,-Map=firmware.map -o firmware.elf firmware.c
You may want to generate a listing of the file to see that everything is at the expected location, as follows:
m68k-elf-objdump -xdC firmware.elf > firmware.lst
In order to flash or deliver this file to the customer, we need to convert it into the Motorola S19/SREC format as follows.
m68k-elf-objcopy --output-format=srec firmware.elf firmware.s19
Step 4: Building the other piece of code
What’s left to build is the boot-loader. In order to ensure distinct flash and memory regions you need to provide another linker script that puts all the boot-loader code into a different location than the other code. A wise choice is to put this code as far away from the actual firmware as possible, possibly at the end of the available memory. The following code offsets the memory bank by 1MB and dumps the code there.
/* bootloader.ld */
MEMORY
{
  sram        : ORIGIN = 0x10200000, LENGTH = 1M
}

__FIRMWARE__ = 0x10100000;

SECTIONS
{
  .text :
  {
    *(.text)
    *(.rodata*)
  } > sram

  .data :
  {
    *(.data)
  } > sram

  .bss :
  {
    *(.bss)
  } > sram
}
In order to invoke the firmware, we need to put a symbol inside the linker script that identifies the expected starting address of the firmware; in this case it is called __FIRMWARE__. This symbol can be used from the C code directly as a function call. In order to avoid compilation warnings, you should forward-declare this function as external. The compilation and transformation into the S19 file is analogous to creating the firmware code. You should end up with a bootloader.s19.
Step 5: Throwing things together
In practice, when you build an embedded device, it should have the boot-loader and some firmware programmed in when it leaves assembly. In many cases the interface that the end-user has to the device (e.g., a USB connector) is different from what you have during assembly (e.g., an in-system flash tool). As such, it is necessary to throw the boot-loader and the firmware together.
The S19 format is a simple ASCII data exchange format, originally developed by Motorola, for executable code. It is widely accepted by most programmers for Motorola-based embedded systems. The files are processed line by line; each line contains a control code, a record size, an address, an optional data sequence and a checksum. You find the details here.
GNU objcopy usually outputs:
  • A block header (S0),
  • A sequence of data records (S1-S3),
  • And the start address (S7-S9).
The block header usually contains the name of the file (e.g., firmware.s19 or bootloader.s19). Most ROM loaders on evaluation boards will actually process the start-address record, which is in our case the declared origin of the SRAM, and fail to load if they do not find it, so it needs to be included.
So in order to put the boot-loader and the firmware together into one file you need to provide one header, the data of both programs and one starting address:
  • Header: any of the firmware/boot-loader, or a custom header (see below)
  • Data: concatenate the data records of both original programs
  • Starting address: the start address of the boot-loader
Step 5a: Composing your own header
Yes, geeky people like me actually like implementing checksum algorithms and branding their creations. In order to do so, we need to dive into the checksum procedure used by S-records. According to Wikipedia the checksum is “[…]the least significant byte of ones' complement of the sum of the values represented by the two hex digit pairs for the byte count, address and data fields.” So guys, it's time to dig out those algorithm-class notes and figure that out, … oh wait …, found it:

  • Sum up all bytes starting from the byte count field.
  • Set: checksum = 0xFF - (0x00FF & sum)

Why the hell would anyone use such a check-summing algorithm? The answer is simple: it can be easily checked! While processing the S19 records, you can actually sum everything up, including the provided checksum, and the low byte should come out as 0xFF. That is a simple compare operation that can be evaluated in no time.
Step 6: Testing your Creation
If you built the Coldfire simulator according to my instructions, you can invoke the simulator as follows…
coldfire --board cjdesign-5307.board
and load the code like this…
Use CTRL-C (SIGINT) to cause autovector interrupt 7 (return to monitor)
Loading memory modules...
Loading board configuration...
        Opened [/usr/local/coldfire/share/coldfire/cjdesign-5307.board]
Board ID: CJDesign
CPU: 5307 (Motorola Coldfire 5307)
        unimplemented instructions: CPUSHL PULSE WDDATA WDEBUG
        69 instructions registered
        building instruction cache... done.

Memory segments: dram  timer0  timer1  uart0(on port 5206)
                 uart1(on port 5207)  sim  flash  sram

!!! Remember to telnet to the above ports if you want to see any output!

Hard Reset...
Initializing monitor...
Enter 'help' for help.
dBug> dl merged.s19
Downloading S-Record...
Done downloading S-Record.

dBug> go 0x10100000 <- the actual firmware
... some garbage, because RTS returns to nowhere...
dBug> go 0x10200000 <- the boot-loader invoking the firmware
You should see the following output on the terminal (telnet localhost 5206).
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

uart0
Inside firmware! <- 1st go of the firmware
Inside boot-loader! <- 2nd go, the boot-loader
Inside firmware!
Back in boot-loader!
Discussion
In this part of the how-to I explained the basics of building two pieces of binary Coldfire code and integrating them into a single file that can be processed by most programmers and ROM-loaders. A popular application is the integration of boot-loader and firmware code for embedded system assembly. Another application is embedded grey box testing. In this technique, instead of a boot-loader a test-suite is evaluated against the firmware to check for potential defects. In the next post, I’ll describe how to design such a test framework.
You can find the sample code of this post here. The code will contain some modified linker scripts that deal with particular alignment problems of the simulator. Furthermore, the boot-loader and the firmware should have different stacks so some assembly files have been added to do so. The S19 merging is done by the python script merge.py.
References and Sample Code