Throughout these projects we have examined how we can work with and create designs for a range of applications. In this project we are going to take a slightly different track: I am going to demonstrate how we can connect a PYNQ-Z2 with a Matrix Voice.
The Matrix Voice contains a number of LEDs, microphones, a DAC and, of course, a Xilinx Spartan 6 FPGA. In normal operation this Spartan 6 provides the functionality which connects the Matrix Voice to a Raspberry Pi.
It will also allow us to connect the Matrix Voice to the PYNQ Z2.
What Is the Voice
The Voice is a development board intended to allow the creation of audio applications. To do this it contains:
- 18 digitally controlled LEDs which can be set to any RGB combination
- 8 microphones for beam forming
- 3 Watt audio output
- Spartan 6 FPGA - contains system interfacing and filtering (CiC and FIR)
- Supporting the FPGA are 64 Mb of flash and 512 Mb of memory
This gives us a system architecture like the one seen below, with the Pi or PYNQ Z2 communicating with the peripherals on the Matrix Voice using a SPI bus.
Internally, as we will see in a little while, the FPGA uses a Wishbone interface to exchange data between the peripherals and the SPI interface.
The Matrix Voice connects to the PYNQ Z2 over the RPI connector, facing outwards from the board to ensure the Pin 1s align.
Once the two are connected we can then power on the PYNQ Z2.
When we first power on the PYNQ Z2 with the Matrix Voice connected, you will see all but one LED illuminate the same color. The first LED will be off; this is left over from the QA testing at Matrix, and we need to upload the latest firmware.
To do this we have two choices: connect the Matrix Voice to a Raspberry Pi and flash the latest bit stream, or upload it using the Xilinx ISE tool chain.
As I have both I will demonstrate both methods.
Preparing the Voice - JTAG
If you are unsure how to set up the ISE environment for Spartan 6 FPGAs, please check here.
To connect the JTAG programmer to the Matrix Voice and supply power, we need to disconnect it from the PYNQ Z2 and connect to it using a number of flying leads from the JTAG pod.
We use the pin out above to check the connections.
As we are going to program a device which is not officially supported by iMPACT, we need to open a terminal window in the VM and enter the following command:
export XIL_IMPACT_SKIPIDCODECHECK=1
Once this has been entered, we can open iMPACT from the same terminal window by typing impact.
Once iMPACT is open, we can scan the boundary scan chain and we should see the Xilinx Spartan 6 LX9 device as below.
We can download the programming files for the FPGA here; they are under the blob directory.
https://github.com/matrix-io/matrix-creator-init
We want to update the non-volatile boot SPI memory which is connected to the FPGA. We cannot use a bit file to program this; instead we need to use an MCS file.
To convert the bit file into an MCS file we can use the iMPACT PROM option; we have a Macronix MX25L 64 Mb memory connected to the FPGA.
Once the PROM is correctly set we can add the bit file and generate the MCS file.
Once the PROM file has been generated we can program the SPI memory; as the MX25L does not appear in the device list, select the N25Q of the appropriate size for the download.
Preparing the Voice - Raspberry Pi
If you do not have a JTAG programming device, it is possible to use a Raspberry Pi to update the initial FPGA configuration stored in the SPI flash.
If you are opting for this method, connect the Pi with the Voice and then follow the instructions outlined here:
https://matrix-io.github.io/matrix-documentation/matrix-voice/resources/fpga/
Regardless of which option you follow, once this stage has been completed you should see all of the lights on the Matrix Voice turned off, as opposed to the initial majority-on yellow.
Look Inside the FPGA
If you are developing for the Matrix Voice with a Raspberry Pi, there are several packages provided for use with the peripherals.
Sadly these are not suitable for use with the PYNQ due to its architecture: on the PYNQ Z2 it is actually a real-time MicroBlaze which does the interfacing with the Matrix Voice via the RPI connector.
As such we need to understand a little about the lower-level interfacing. We can do this using the Matrix GitHub repositories, which describe the FPGA design.
Internally the FPGA runs at 200 MHz and, as I mentioned above, communicates with the PYNQ Z2 using a SPI interface.
To be able to interface correctly with the Matrix Voice, we therefore need to know:
1) The SPI bus settings for CPOL and CPHA.
2) The SPI Data and Addressing format.
3) The Endianness of the SPI Data.
4) The Memory Map of the system peripherals.
To help us find this information we can read the code; rather helpfully, it also comes with a test bench which can be run in a logic simulator.
To work with the code we need to clone the Git Repo which contains the Voice FPGA design
https://github.com/matrix-io/matrix-voice-fpga.git
With the RTL cloned we can begin to understand the code architecture. The best way to do this is by simulating it. In this instance, as it is just an RTL simulation, I used Vivado, as it saves having to fire up the ISE virtual machine.
This test bench generates a number of reads and writes over the SPI bus during the initial set up of the system. This can be seen below in the waveform captures.
Observing the SPI transactions zoomed in shows that SCLK idles high (CPOL = 1) and the data changes on the falling edge of SCLK, while the receiving module captures the data on the rising edge (CPHA = 1).
This enables us to answer point one of the interfacing questions above.
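As an aside, CPOL and CPHA are conventionally combined into a single SPI mode number, which is useful to know when configuring a master later. A quick Python sketch of the mapping (the function name is my own):

```python
def spi_mode(cpol, cpha):
    # Conventional SPI mode number: CPOL in bit 1, CPHA in bit 0
    return (cpol << 1) | cpha

# CPOL = 1, CPHA = 1 as observed in the simulation
print(spi_mode(1, 1))  # → 3 (SPI mode 3)
```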
From this simulation and by reviewing the code we can also determine the address data format of the SPI transaction.
Each SPI transaction is 16 bits; the first word transferred contains the 15 bits of the address and a read or write bit. When the read / write bit is low, the command is a write command.
The format of the packet is:
Address[14:0], Read_nWrite
Following packets contain data bytes either being written or read from the Matrix Voice, knowing this format answers question two above.
To answer the third question regarding endianness we can also obtain this from the simulation.
The implementation uses a little endian approach, in which the least significant byte is sent or received first.
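To make the framing concrete, here is a small Python model of how the command word could be assembled (the helper name is my own; the format follows the description above): the 15-bit address occupies the upper bits, the read/write flag sits in the LSB, and the least significant byte goes out first.

```python
def spi_command(addr, read):
    # 16-bit command word: Address[14:0] shifted up one, R/nW flag in the LSB
    word = ((addr & 0x7FFF) << 1) | (1 if read else 0)
    # Little endian: the least significant byte is transferred first
    return bytes([word & 0xFF, (word >> 8) & 0xFF])

# A read of address 0x0001, as in the test bench
print(spi_command(0x0001, read=True).hex())  # → '0300'
```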
To verify the endianness of the data transfer, we can change the test bench to use the addresses we will be working with:
initial begin: TEST_CASE
  #250 -> reset_trigger;
  #0 CS_SELECT <= 1'b0;
  #0 pdm_data <= 8'hAA;
  @ (reset_done_trigger);
  #500
  spi_transfer_pi(15'h0000, 16'd1, read);
  spi_transfer_pi(15'h0001, 16'd0, read);
  spi_burst_transfer(15'h2000, 5'd17, write);
end
We can answer the fourth question by looking at the top level of the Verilog code; here you will find the address map is:
0x0000 - BRAM - Version ID and DAC settings
0x2000 - Microphone Array
0x3000 - LED array
0x4000 - GPIO
0x6000 - DAC FIFO
The exact registers at these locations can be found by reading the Matrix Hardware Abstraction Layer (HAL)
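For later reference, the memory map can be captured in a small Python dictionary; the key names and the helper below are my own invention, derived from the list above:

```python
# Matrix Voice peripheral base addresses, per the Verilog top level
MATRIX_MAP = {
    "bram": 0x0000,  # version ID and DAC settings
    "mics": 0x2000,  # microphone array
    "leds": 0x3000,  # LED array
    "gpio": 0x4000,  # GPIO
    "dac":  0x6000,  # DAC FIFO
}

def led_addr(led):
    # Each LED occupies two consecutive 16-bit locations (hypothetical helper)
    return MATRIX_MAP["leds"] + led * 2

print(hex(led_addr(17)))  # → '0x3022'
```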
Exploring the code further will give a fuller understanding of how the entire design works; for example, the SPI slave state machine demonstrated below.
One of the main elements of the design we want to use is the LED ring. This is controlled via a BRAM which contains the colors for the LEDs; the SPI master can then change the color of the LEDs by writing to those BRAM locations.
A similar approach is taken for the DAC, although a FIFO is used in place of the BRAM for samples, while control still takes place using the BRAM module in the FPGA. Samples from the microphones, meanwhile, are captured and streamed via the Wishbone interface.
Now we understand how we communicate with it, we are able to write an application on the PYNQ Z2.
Working with the RPI Interface in the PYNQ
As mentioned in the introduction, the RPI header is connected to the PYNQ Z2 using a MicroBlaze IO Processor (IOP). This IOP provides a range of interfaces which can be connected to the 40-pin RPI header.
As you can see in the diagram below, we can connect any combination of timers, SPI, IIC, GPIO, UART to the RPI connector using the configurable switch.
RPI IOP (📷: https://pynq.readthedocs.io/en/v2.4/pynq_libraries/rpi.html)
This means when we want to connect to an external device using the RPI IOP we have a wide choice of interfacing standards.
As the interfaces are already implemented in the PL, we do have a few constraints on their configuration:
- SPI - Master 8-bit transfer, 6.25 MHz, 16 deep FIFO
- I2C - 100 kHz, 7-bit addressing
- UART - 115200 Baud
- GPIO - 28 outputs
- AXI timer - 1 PWM output
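These constraints also bound how fast we can talk to the Matrix Voice; as a rough sketch, a four-byte SPI frame (the size used later in this project) needs about five microseconds on the wire at 6.25 MHz, ignoring software overhead:

```python
SPI_CLK_HZ = 6.25e6  # RPI IOP SPI clock rate

def transfer_time_us(num_bytes):
    # Best-case time on the wire: eight clocks per byte
    return num_bytes * 8 / SPI_CLK_HZ * 1e6

print(round(transfer_time_us(4), 2))  # → 5.12
```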
As always, we start in the Jupyter notebook, where we will first download the base overlay so that its capabilities are available to us.
As the RPI header shares pins with the PmodA connector, we must also configure which of the interfaces we wish to use. We can do this using the function:
base.select_rpi()
When it comes to developing our application in the Jupyter notebook, we can use the IPython %%microblaze magic command to target the RPI MicroBlaze.
So far so good; like all MicroBlazes we need a BSP which allows us to drive the peripherals, but we also need additional source code to integrate the IOP and switch functionality.
Rather helpfully, each of the IOPs implemented (e.g. Pmod, Arduino and RPI) has its own library installed as part of the PYNQ libraries.
We can then write a number of applications in the IOP which will allow us to work with the PYNQ Z2 and the Matrix Voice.
The first thing to do is to create a simple example which allows us to read and write over the SPI interface.
Setting this interface up to write 0x11 0x22 to address 0x2 in the BRAM of the Matrix Voice shows the results below.
The read function returns a value of 4386, which is 0x1122 in hexadecimal.
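We can sanity check that decimal value with a line of Python:

```python
# Combine the two data bytes 0x11 and 0x22 into one 16-bit value
value = (0x11 << 8) | 0x22
print(value, hex(value))  # → 4386 0x1122
```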
Knowing we can read and write over the interface means we can develop the application.
The Application
The first application I am creating for the Matrix Voice will provide several basic functions for reading and writing which can be used with the LED ring, microphones and DAC.
The first thing we need to do is create a function to set up the SPI interface in the correct mode.
spi device;

unsigned int set_up_spi(){
    device = spi_open(11, 9, 10, 8);  // SPICLK, MISO, MOSI, SS pins on the RPI header
    spi_configure(device, 1, 1);      // CPOL = 1, CPHA = 1 (SPI mode 3)
    return device;
}
The second thing we need to do is create read and write functions for the SPI interface. These functions take account of the read / write flag and the address translation necessary for the 15-bit address.
As such, to use these functions we define the address as per the memory map; the read function can be seen below.
unsigned int read_spi(int addr){
    u8 tx[4];
    u8 rx[4];
    u16 int_addr = (u16) addr << 1;   // shift to make the 15-bit address
    tx[0] = ((u8) int_addr) | 0x01;   // set the LSB for read
    tx[1] = (u8) (int_addr >> 8);
    tx[2] = 0x00;
    tx[3] = 0x00;
    rx[0] = 0;
    rx[1] = 0;
    rx[2] = 0;
    rx[3] = 0;
    spi_transfer(device, (char*)tx, (char*)rx, 4);
    return (unsigned int) ((rx[2] << 8) + rx[3]);
}
The corresponding write function is shown below also.
unsigned int write_spi(int addr, int data){
    u8 tx[4];
    u8 rx[4];
    u16 int_addr;
    int_addr = (u16) addr << 1;       // shift to make the 15-bit address
    tx[0] = ((u8) int_addr) | 0x00;   // keep the LSB clear for write
    tx[1] = (u8) (int_addr >> 8);
    tx[2] = (u8) data;
    tx[3] = (u8) (data >> 8);
    rx[0] = 0;
    rx[1] = 0;
    rx[2] = 0;
    rx[3] = 0;
    spi_transfer(device, (char*)tx, (char*)rx, 4);
    return (unsigned int) ((rx[2] << 8) + rx[3]);
}
Each LED requires a 32-bit number, so two 16-bit address locations are needed per LED. Each LED's pair of addresses is organised in the following memory structure:
0x00 - Green / Red
0x01 - White / Blue
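As a worked example, the 32-bit value 0x001f0000 used for the blue dot later in this project splits into the two 16-bit words like so (a sketch of the packing implied by the structure above; the helper name is mine):

```python
def pack_led(value):
    # Split a 32-bit LED value into the two 16-bit words written over SPI
    b = value.to_bytes(4, 'big')
    low_data = (b[2] << 8) | b[3]    # offset 0x00: Green / Red word
    high_data = (b[0] << 8) | b[1]   # offset 0x01: White / Blue word
    return low_data, high_data

print([hex(w) for w in pack_led(0x001f0000)])  # → ['0x0', '0x1f']
```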
def led_light(led, wrgb):
    addr = 0x3000 + (led * 2)
    byte = wrgb.to_bytes(4, 'big')
    low_data = (byte[2] << 8) | byte[3]
    high_data = (byte[0] << 8) | byte[1]
    write_spi(addr, low_data)
    write_spi(addr + 1, high_data)
This function can then be called from a simple Python loop to make a blue dot whizz around the LED ring.
import time

while True:
    for x in range(0, 18):
        if x == 0:
            led_light(17, 0x00000000)     # turn off the last LED in the ring
        else:
            led_light(x - 1, 0x00000000)  # turn off the previous LED
        led_light(x, 0x001f0000)          # light the current LED blue
        time.sleep(0.1)
The image below shows the dot as it progresses around the ring.
The next application is to make the dot change direction depending on the sounds picked up from the microphones. I will demonstrate that in another project soon as I need to read more of the source code to configure the microphones.
Wrapping Up
Now we have the ability to start working with the microphones and higher-level frameworks such as Alexa and Google Voice using the PYNQ Z2. Of course, we have the programmable logic as well, which means we can accelerate signal filtering and analysis, machine learning inference, etc. in real time.
See previous projects here.
Additional Information on Xilinx FPGA / SoC Development can be found weekly on MicroZed Chronicles.