From zero to RISC-V in hardware, in 6 minutes

2024/10/05

Program your FPGA with a one-liner command. It’s a kind of magic.

This is a project in sustainable FPGA development that I have been working on. It ties together several smaller projects that I developed in a perpetual quest for hermetic, ephemeral, and reproducible builds. As far as I know, you cannot find a setup like this anywhere else. It is a long and uneventful video, but hardware people should be able to recognize the appeal.

Words are cheap: check out the video first, and I will explain the details later. If you are sufficiently familiar with the material, you will recognize the value right away and you may not need to read any further.

What you may note in the video is as follows:

This 32-bit RISC-V setup is not the final setup.

I needed a controller to execute programmable hardware tests for an as-yet-unpublished project. (Which I hope to publish eventually, but we'll see about that.)

But I only have a plain FPGA, not a Zynq SoC, so I couldn't rely on any hard processor IP on the device. So I created a testbench that enables me to continue my work. That testbench is what you see in the video.

Why?

From times past, when "do you use source control" was still a reasonable question to ask, comes the venerable blog Joel on Software. Once upon a time, Joel formulated 12 questions for quickly judging the quality of a development team. These questions have since become known as "The Joel Test".

Its question 2 is relevant here, and it tells the story far more eloquently than I ever could:

2. Can you make a build in one step?
By this I mean: how many steps does it take to make a shipping build from the latest source snapshot? On good teams, there’s a single script you can run that does a full checkout from scratch, rebuilds every line of code, makes the EXEs, in all their various versions, languages, and #ifdef combinations, creates the installation package, and creates the final media — CDROM layout, download website, whatever.

If the process takes any more than one step, it is prone to errors. And when you get closer to shipping, you want to have a very fast cycle of fixing the “last” bug, making the final EXEs, etc. If it takes 20 steps to compile the code, run the installation builder, etc., you’re going to go crazy and you’re going to make silly mistakes.

For this very reason, the last company I worked at switched from WISE to InstallShield: we required that the installation process be able to run, from a script, automatically, overnight, using the NT scheduler, and WISE couldn’t run from the scheduler overnight, so we threw it out. (The kind folks at WISE assure me that their latest version does support nightly builds.)

What?

Programmable hardware buffs among you will remember how hard it is to create an FPGA development workbench from scratch.

The typical workflow goes something like this:

What if things could just magically happen? Is such a thing even possible?

Yes

Why yes, yes it is. The video linked above shows you an FPGA programming workflow I developed, using nothing but open-source or freely available tooling.

The video shows me programming the Alinx A200T FPGA board, based on the AMD (formerly Xilinx) Artix-7 XC7A200T device, starting from a setup that contains no mission-specific software preinstalled.

The entire process is started and completed with a single command.

And the board being programmed was hundreds of miles away from the build machine.
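As a sketch of what that one-liner might look like: the invocation below is hypothetical, since the actual target name is project-specific, but it conveys the shape of the workflow.

```shell
# Build the bitstream (remotely) and program the board, all in one step.
# "//fpga:program" is a hypothetical Bazel target, shown for illustration.
bazel run //fpga:program
```
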

Obvious parts

That single command does a number of important things that happen automatically for you. There is no fiddling with any finicky piece of software. We started from scratch, on a machine that only has bazel installed.

No workflow that I am aware of today can compare.

Less obvious parts

Remote build

You may not have noticed this.

The entire build process, including synthesis, happens on a virtual machine in Google Cloud Platform (GCP). I do this because I can size the virtual machine based on the requirements of the vendor software I must use. If I need more compute, I can reconfigure the virtual machine with more cores or memory and continue. If I need more than a single machine, I can trivially repeat the same setup on each one.

For example: a few days ago, I used a 4-core machine with 16 GB of RAM. Today, I used a 32-core machine with 64 GB of RAM to speed up place-and-route. This is all completely transparent to you as the user.
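Resizing the build machine is a matter of a few gcloud commands. The instance name and zone below are hypothetical placeholders:

```shell
# A GCP instance must be stopped before its machine type can change.
gcloud compute instances stop build-vm --zone=us-central1-a

# Switch to a 32-core machine type for a faster place-and-route run.
gcloud compute instances set-machine-type build-vm \
    --zone=us-central1-a --machine-type=n2-standard-32

# Bring the resized machine back up and resume building.
gcloud compute instances start build-vm --zone=us-central1-a
```
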

Remote programming

You may not have noticed this either. The FPGA board that was being programmed lives on my desk at home. While the build process happens on GCP, I use the remote programming approach I developed to apply a bitstream built in the cloud to my device.

This happens automatically, without any special action on your part. The workflow simply does the right thing.

This remote programming approach is far superior to the one described on AMD's own website. The approach they describe uses an unsecured connection to a co-located machine.

My setup uses secure, fully encrypted SSH tunneling across the continental US.
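The core of such a tunnel can be sketched as follows. It assumes the hardware server (AMD's hw_server listens on port 3121 by default) is running on the machine next to the board, and the hostname below is a hypothetical placeholder:

```shell
# Run on the cloud build VM: forward local port 3121 through an encrypted
# SSH tunnel to the hw_server running on the machine next to the board.
# "me@home-gateway.example.com" is a placeholder for the real host.
ssh -N -L 3121:localhost:3121 me@home-gateway.example.com
```

With the tunnel up, the programming tool on the build VM connects to localhost:3121 as if the board were attached locally.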

Vendored software installation

While the vendored software installation is also made hermetic, I could not have bazel itself prepare it. This is for two reasons:

Since the entire installation lives in GCP, if you need more storage, you can add it in seconds.

Barring the issues above, however, the approach is the same for any piece of software you may need to use.

Remote everything

I have my FPGA board powered by a smart plug, which means I can power it on and off remotely. With the GCP-based remote setup, I am able to do the following:
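Remote power-cycling can be sketched roughly as below. The endpoint and parameters are entirely hypothetical, since every smart plug exposes a different API:

```shell
# Power-cycle the board through a hypothetical smart-plug HTTP API.
# "plug.local" and the "/api/power" endpoint are illustrative placeholders.
curl -X POST "http://plug.local/api/power" -d 'state=off'
sleep 2
curl -X POST "http://plug.local/api/power" -d 'state=on'
```
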

Conclusion

I hope you can appreciate the streamlined process outlined above. Its interesting features are:

If you want to share your comments with me, consult the contact section at the bottom of my home page.