
Porting Linux to a Homemade CPU

Wouldn’t it be cool to create a homebrew CPU architecture, then port Linux to it? I realize that single sentence probably implies a couple years’ worth of work, but that’s OK. A real, full-fledged OS running on a totally custom piece of hardware would be awesome! I’ve already designed and built homebrew CPUs several times before, though only one had anything that could charitably be called an “operating system”, and none of them was anywhere near capable enough to run something as complex as Linux. What minimum specs would a homebrew processor need to run a bare-bones, text-mode Linux? Core speed? Number and size of internal registers? Supervisor/user mode? MMU? How much RAM? What else?

There’s so much I don’t know, I don’t even know what I don’t know. That’s what makes it exciting. I’m perfectly comfortable as a Linux /user/, with a variety of random Unix-world experience going back 20 years to Sun SPARCstations. But I know next to nothing about how Linux (or any other Unix flavor) is put together underneath – what exactly the bootloader is, what the kernel does, what init is, and all those other fun terms I hear tossed around. How do you go about porting Linux to a new board, or a new CPU architecture?

Based on 2 minutes of exhaustive research, I’m looking at uClinux as a starting point, rather than any regular desktop Linux. uClinux is aimed at very low-end CPUs with no MMU, and “low end” will doubtless describe whatever I end up building. The related uCsimm module is a 16 MHz DragonBall CPU with 2MB of ROM and 8MB of RAM, so that gives me a rough idea of where to aim for CPU specs. If I can get away with less, great. Bill Buzbee’s homemade Magic-1 runs Minix on a 4MHz custom 16-bit CPU with 4MB RAM.

To be candid, I’m a lot less excited about designing a 4th homemade CPU architecture than I am about porting an operating system to it. So as a starting point (and maybe a finishing point), it might make more sense to try porting Linux to an existing but obscure CPU or development board. That would get me familiar with the details and requirements of the porting process, which would help inform what features would be most valuable when I design my own CPU/board.

Assuming I do go forward with a homebrew CPU (or maybe try to augment the BMOW1 CPU), I’m still unclear where to even begin with porting Linux to it. The Linux kernel is available as source code, so presumably one of the first steps would be to modify and compile this code for the new CPU. That implies using a cross-compiler that runs on an x86 PC but outputs NewCPU code. So how do I make such a cross-compiler?
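
From what I’ve read so far, the usual recipe for building a cross-gcc is to build cross-binutils first, then a C-only gcc configured for the target triple, with no target headers needed yet for kernel work. Something like the following sketch, where the target triple “newcpu-elf” is entirely made up for my hypothetical CPU:

    # build cross-binutils first, then a minimal C-only cross-gcc
    # ("newcpu-elf" is an invented target triple, not a real port)
    ../binutils-2.24/configure --target=newcpu-elf --prefix=$HOME/cross
    make && make install
    ../gcc-4.9.0/configure --target=newcpu-elf --prefix=$HOME/cross \
        --enable-languages=c --without-headers
    make all-gcc && make install-gcc

Of course, those configure flags only work once binutils and gcc have been taught what “newcpu-elf” means, and teaching them is exactly the hard part.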

A few years back, I took a brief look at adding support for a new CPU architecture to gcc, but quickly gave up. It looked complicated and confusing, and I really had no idea what I was doing. Unfortunately, that’s probably where I need to start, since I’ll need a cross-compiling gcc to build the kernel sources. Alternatively, I could compile gcc from its own source code using a different compiler, but then where would I get that other compiler? I had some limited success porting the Small C compiler to BMOW a few years ago, so perhaps I could start there, but that compiler was so limited and simple that it’s probably useless. Given all that, a reasonable place to start would be a deeper investigation into what’s required to extend gcc to support a new and novel type of CPU. Whee!
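
For a taste of what that gcc work involves: instruction selection is driven by a machine description (.md) file full of RTL patterns, one for each operation the CPU can perform. As far as I can tell, a three-register add on a hypothetical RISC-style machine would look roughly like this (the pattern shape is real gcc .md syntax, but the constraints and output template are illustrative, not from any actual port):

    ;; hypothetical addsi3 pattern for a made-up three-register CPU
    (define_insn "addsi3"
      [(set (match_operand:SI 0 "register_operand" "=r")
            (plus:SI (match_operand:SI 1 "register_operand" "r")
                     (match_operand:SI 2 "register_operand" "r")))]
      ""
      "add %0,%1,%2")

Multiply that by every instruction, addressing mode, and calling-convention detail, and it’s easy to see why I gave up the first time.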


9 Comments so far

  1. David - October 23rd, 2014 11:22 pm

    Yes, a cross compiler targeting your CPU would certainly be necessary. But gcc isn’t the only choice for building the Linux kernel these days: LLVM is almost there (some patches are still needed on both sides – http://llvm.linuxfoundation.org/). And it has a much less crufty codebase, so you might want to take a look.

  2. Peter Lund - October 24th, 2014 3:10 am

    Another option would be to make a cross-assembling assembler that replaces gas. Make gcc output code for a simple CPU, say one of the FPGA RISC soft-CPUs, and let your cross-assembler translate that to code for /your/ simple CPU. You should be able to get up and running with a very simple x-assembler that basically does macro substitution — and then later add peephole optimizations or basic blocks, register (re)allocation, and tree matching-based code generation. You might even be able to tweak gcc relatively easily to believe that the target CPU has way more registers than it really has so you don’t get any spill/fill code for the virtual target — you can then make your x-assembler generate it as needed for your real target.

  3. Steve Chamberlin - October 24th, 2014 8:07 am

    I’m probably in over my head already, ha! David, I was reading about LLVM yesterday and it does look promising, although I haven’t peeked at its code yet. Any idea how complicated the required patching is for using it to compile the Linux kernel? I guess it would just be one more challenge on top of a pile of others.

    Peter, I like the idea of a cross-assembler, since it’s something I can wrap my head around more easily. I’m not entirely clear how it would work, but I’m sure there’s some decent reading material I can find about it. How would it handle things like the real target CPU having more registers than my simple CPU? Some instruction referencing R11 can’t be simply translated if there is no R11. Are you thinking that a macro would substitute code for a synthetic R11 at a hard-coded memory address, or maybe on the stack or someplace similar?

  4. Peter Lund - October 24th, 2014 10:43 am

    Yes, that’s what you would do: just use a memory address for those cases. But who says your CPU should have fewer registers?

    At least, that’s how you would do it at first. Later on, you may decide that it’s wasteful so you do your own register assignment and your own spill/fill code. In that case, the compiler will already have generated spill/fill code for the CPU it believes it is targeting. That code is not really being productive anymore so you want to get rid of it. You can either detect it and remove it or you can try to prevent it from being generated. Lying to gcc about the number of registers on the target architecture may be a reasonably easy way of doing that. It may also be a reasonably easy way of avoiding having a synthetic R11 at a hard-coded memory address.
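
    As a concrete illustration (every name here is invented): suppose the real CPU’s registers stop at r7 but the virtual target used r11. The x-assembler can reserve one real register, say r7, as scratch, give r11 a fixed RAM address, and rewrite

        add r4, r11

    into

        load  r7, [slot_r11]   ; slot_r11 = fixed RAM address backing virtual r11
        add   r4, r7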

    The Dragon Book is a good place to start — the old ones are definitely good, the new one is apparently also good. Another good book to read would be the Tiger Book. Get the one that uses ML — the code in the others is just ML badly disguised as Java or C.

    Oh, one more thing: your register allocation algorithm doesn’t have to be fancy. Linear scan works surprisingly well.

    This is btw how DEC ported VMS from VAX to the Alpha!
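
    To make the linear scan remark concrete: the whole algorithm is about a page. Sort the live intervals by start point, hand out free registers, and when none are free, spill whichever active interval ends furthest away. A rough Python sketch, with the register count and interval format invented for the example:

        # Rough sketch of linear scan register allocation (Poletto & Sarkar).
        # Intervals are (start, end, name) live ranges; NUM_REGS is invented.
        NUM_REGS = 4

        def linear_scan(intervals):
            intervals = sorted(intervals)          # process in order of start point
            free = ["r%d" % i for i in range(NUM_REGS)]
            active = []                            # (end, name) pairs currently live
            location = {}                          # name -> register or "spill"
            for start, end, name in intervals:
                # Retire intervals that ended before this one starts.
                for item in sorted(active):
                    if item[0] > start:
                        break
                    active.remove(item)
                    free.append(location[item[1]])
                if free:
                    location[name] = free.pop()
                    active.append((end, name))
                else:
                    # No register free: spill whichever live range ends last.
                    furthest = max(active)
                    if furthest[0] > end:
                        location[name] = location[furthest[1]]
                        location[furthest[1]] = "spill"
                        active.remove(furthest)
                        active.append((end, name))
                    else:
                        location[name] = "spill"
            return location

        print(linear_scan([(0, 9, "a"), (1, 3, "b"), (2, 6, "c"),
                           (4, 8, "d"), (5, 7, "e")]))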

  5. Steve Chamberlin - October 24th, 2014 11:30 am

    The Dragon Book – that takes me back! I never did take a compilers class in university, but I can remember the classmates who took it carrying that book around. I got a copy when I was working on BMOW, but found it hard to penetrate. Most of it seemed to be about compiler front-end stuff like syntax parsing, rather than register assignment or optimization or code generation, which is what I mostly needed to know for retargeting an existing compiler to a new machine. I ended up giving the book away to someone else. I’m not familiar with the Tiger Book. I did find a nice document on how to write a new back end for LLVM (http://llvm.org/docs/WritingAnLLVMBackend.html), so I’ll probably start there.

  6. Peter Lund - October 24th, 2014 1:01 pm

    No, it is also about type checking, block structures, lexical vs. dynamic scoping, (extended) basic blocks, control flow graphs, register allocation, intermediate code, peephole optimization, code generation, strength reduction, implicit/explicit casts, etc, etc, etc.

    Denmark has very good public libraries with inter-library loans, so I was able to discover the first edition (with a slightly different title) amongst the many other books I read to try to get a handle on computer science and programming. This was one of the Really Good books, so I bought the next edition (i.e. the first edition with the new title) a bit later. I think I was still in Gymnasiet (Danish high school) at that point.

    If you find it too intimidating or “theoretical” (which I promise you it actually isn’t), you could do a lot worse than going through Jack Crenshaw’s “Let’s Build a Compiler” series.

    The Tiger book will please you. So will ML (possibly after a getting-used-to-it period of reshaping your brain).

    But anyway, the point of my suggested cross-assembler solution is that you can get away with very little compiler knowledge and still get something that works. It will be inefficient, but that’s okay, as long as there is an incremental path towards better efficiency and more realistic (normal) code generation — which there is. You can literally start with a dumb Perl script that reads a line at a time, cleans it up a bit, and then spits out one or more lines of mostly fixed text but with a few variable interpolations here and there.
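
    In fact, the first version really can be that small: a line-oriented rewriter driven by a table of pattern-to-template rules. A sketch in Python rather than Perl (the two instruction sets here are invented; the structure is the point):

        # Toy cross-assembler: translate a virtual three-register syntax
        # into a made-up one-accumulator target, one line at a time.
        import re
        import sys

        RULES = [
            # add dst, src1, src2  ->  acc = src1; acc += src2; dst = acc
            (re.compile(r"add (\w+), (\w+), (\w+)"),
             "load {1}\nadd {2}\nstore {0}"),
            # li dst, imm  ->  acc = imm; dst = acc
            (re.compile(r"li (\w+), (\S+)"),
             "loadi {1}\nstore {0}"),
        ]

        for line in sys.stdin:
            line = line.strip()
            for pattern, template in RULES:
                m = pattern.fullmatch(line)
                if m:
                    print(template.format(*m.groups()))
                    break
            else:
                print(line)  # pass labels, directives, etc. through unchanged

    Run it as “python xasm.py < virtual.s > real.s” and grow the rule table from there.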

    gcc is a “simple” wrapper around a bunch of other programs that do the real work, gas being one of them. What those programs are called (and what their full paths are) and what extra secret parameters to give them are decided by a “spec file”. You can override it with ‘gcc -specs=’. So you just write your own spec file that invokes your cross-assembler instead of gas.
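
    For example (spec syntax is arcane and varies between gcc versions, so treat this as the general shape rather than exact incantations):

        gcc -dumpspecs > newcpu.specs
        # edit newcpu.specs: in the *invoke_as: section, replace the
        # literal "as" command with your cross-assembler's name
        gcc -specs=newcpu.specs -c hello.c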

    That doesn’t get you all the way through to a binary, but I suppose you already have an assembler and a linker for your architecture (or expect to write them). It should at least get a large part of the Linux/compiler porting job done.

    ELF is the normal object file format for the kernel and for gcc — it is a surprisingly nice and flexible format but the spec is perhaps larger than you’d like.
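
    The fixed part of an ELF header is small enough to poke at by hand. For instance, this little Python fragment (the file path is just an example) pulls out the class, endianness, type, and machine fields:

        # Peek at an ELF header: 16-byte e_ident, then e_type and e_machine.
        import struct

        with open("/bin/ls", "rb") as f:            # any ELF file will do
            ident = f.read(16)                      # e_ident
            assert ident[:4] == b"\x7fELF"          # magic number
            is64 = (ident[4] == 2)                  # EI_CLASS: 1 = 32-bit, 2 = 64-bit
            endian = "<" if ident[5] == 1 else ">"  # EI_DATA: 1 = little-endian
            e_type, e_machine = struct.unpack(endian + "HH", f.read(4))
            print("64-bit" if is64 else "32-bit", "type:", e_type,
                  "machine:", e_machine)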

    If you want to learn more about linkers, loaders, and object file formats before you dive into writing your own linker then I highly recommend John R. Levine’s “Linkers and Loaders”.

  7. Lawrence - October 26th, 2014 6:46 pm

    I once heard of a project (http://dmitry.gr/index.php?r=05.Projects&proj=07.%20Linux%20on%208bit) where a person got Linux running on an AVR microcontroller by emulating an ARM processor on it and running Linux on that. In theory, you could use a similar design: build a custom CPU architecture, write an ARM emulator for it, and then install any distro with an ARM build onto it. That implementation took about 4 hours to boot and about a minute to execute each command at the terminal, but it worked.

  8. Bill Maddox - March 25th, 2016 4:39 pm

    I know this thread is ancient, but perhaps you are still interested in this project. Fabrice Bellard has written a small, relatively simple C compiler that can compile the Linux kernel. This demonstration was done some time ago, so you may need to use a rather old kernel.
    See TCC at http://bellard.org/tcc/.

  9. nz azhari - June 20th, 2022 6:05 am

    Hi Steve. I accidentally ran into this site while looking for free cross-assemblers to go with my own CPU, which is already running and communicating via UART. This CPU is an alternative answer to the licensing issues of other CPUs such as x86 and ARM, and of soft cores such as MicroBlaze and NIOS. RISC-V is too complicated for common people. The CPU is self-documenting, as it was created in schematic form. Most important, it is as free as the air, with no limitations on derivative works. It has many unique features not present in other CPUs. If you are interested in producing a cross-assembler for it, please contact me.
