This blog extends my TensorFlow Lite for Microcontrollers tutorial. I hope it helps you debug your TinyML applications.
1. Pointer-related errors
2. Tensor_arena-related errors
3. Model-specific setup function-related errors
4. Miscellaneous errors
1. Pointer-related errors
1.a Not declaring a pointer to the model - Compile time error
Some notes:
This pointer holds the reference to a location in memory where the model can be found. The same will be used to create the interpreter for our TinyML application.
Error message:
error: 'tflModel' was not declared in this scope tflModel = tflite::GetModel(model);
Solution:
Declaring a pointer to the model, using the line of code below, would solve this error.
const tflite::Model* tflModel;
1.b Not declaring a pointer to the error_reporter - Compile time error
Some notes:
A Micro Error Reporter is defined to provide a mechanism for logging debug information during inference. The interpreter uses the error reporter to print any errors it encounters. This pointer holds the reference to a location in memory where the error_reporter can be found.
Error message:
error: 'tflErrorReporter' was not declared in this scope tflErrorReporter = &micro_error_reporter;
Solution:
Declaring a pointer to the error reporter, using the line of code below, would solve this error.
tflite::ErrorReporter* tflErrorReporter;
1.c Not declaring a pointer to the input tensor - Compile time error
Some notes:
The input tensor is used to load data into a TinyML model. This pointer holds the reference to a location in memory where the input tensor can be found.
Error message:
error: 'tflInputTensor' was not declared in this scope tflInputTensor = tflInterpreter->input(0);
Solution:
Declaring a pointer to the input tensor, using the line of code below, would solve this error.
TfLiteTensor* tflInputTensor;
1.d Not declaring a pointer to the output tensor - Compile time error
Some notes:
The output tensor is used to access the TinyML model's output. This pointer holds the reference to a location in memory where the output tensor can be found.
Error message:
error: 'tflOutputTensor' was not declared in this scope tflOutputTensor = tflInterpreter->output(0);
Solution:
Declaring a pointer to the output tensor, using the line of code below, would solve this error.
TfLiteTensor* tflOutputTensor;
1.e Not declaring a pointer to the interpreter - Compile time error
Some notes:
The interpreter is the piece of code that will execute our model on the data we provide. This pointer holds the reference to a location in memory where the interpreter can be found.
Error message:
error: 'tflInterpreter' was not declared in this scope tflInterpreter = &static_interpreter;
Solution:
Declaring a pointer to the interpreter, using the line of code below, would solve this error.
tflite::MicroInterpreter* tflInterpreter;
2. Tensor Arena-related errors
2.a Not declaring Tensor_arena - Compile time error
Some notes:
The Tensor arena is an area of memory that stores the model’s input, output, and intermediate tensors.
Error message:
error: 'tensorArena' was not declared in this scope static tflite::MicroInterpreter static_interpreter(tflModel, micro_mutable_op_resolver, tensorArena, tensorArenaSize, tflErrorReporter);
Solution:
Declaring a Tensor_arena array, using the line of code below, would solve this error.
constexpr int tensorArenaSize = 102 * 1024;
uint8_t tensorArena[tensorArenaSize];
2.b Size of TensorArena is too large
Some notes:
This error occurs when the size of the allocated Tensor Arena is more than what's physically available on the microcontroller.
Error message:
c:/users/vishw/appdata/local/arduino15/packages/esp32/tools/xtensa-esp32-elf-gcc/1.22.0-97-gc752ad5-5.2.0/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/bin/ld.exe: C:\Users\vishw\AppData\Local\Temp\arduino_build_31234/unity_game_final_final_2.ino.elf section `.dram0.bss' will not fit in region `dram0_0_seg'
c:/users/vishw/appdata/local/arduino15/packages/esp32/tools/xtensa-esp32-elf-gcc/1.22.0-97-gc752ad5-5.2.0/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/bin/ld.exe: DRAM segment data does not fit.
c:/users/vishw/appdata/local/arduino15/packages/esp32/tools/xtensa-esp32-elf-gcc/1.22.0-97-gc752ad5-5.2.0/bin/../lib/gcc/xtensa-esp32-elf/5.2.0/../../../../xtensa-esp32-elf/bin/ld.exe: region `dram0_0_seg' overflowed by 610936 bytes
collect2.exe: error: ld returned 1 exit status
exit status 1
Error compiling for board ESP32 Dev Module.
Solution:
To solve this issue, incrementally reduce tensorArenaSize until the linker error goes away.
constexpr int tensorArenaSize = 102 * 1024;
2.c Size of TensorArena is too small
Some notes:
This occurs when the size of the allocated Tensor Arena is less than what's required by the model to store the input, output, and intermediate values.
Error message:
ets Jun 8 2016 00:22:57
rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0018,len:4
load:0x3fff001c,len:1216
ho 0 tail 12 room 4
load:0x40078000,len:10944
load:0x40080400,len:6388
entry 0x400806b4
Guru Meditation Error: Core 1 panic'ed (StoreProhibited). Exception was unhandled.
Core 1 register dump:
PC : 0x400d2ee6 PS : 0x00060330 A0 : 0x800d332f A1 : 0x3ffb1ee0
A2 : 0x3ffbff20 A3 : 0x00000000 A4 : 0x00000000 A5 : 0x00000000
A6 : 0x00000000 A7 : 0x0000000a A8 : 0x800d2ee6 A9 : 0x3ffb1ec0
A10 : 0x3f400341 A11 : 0x3f4002d9 A12 : 0x00000014 A13 : 0x00000410
A14 : 0x3ffc1364 A15 : 0x00000000 SAR : 0x00000019 EXCCAUSE: 0x0000001d
EXCVADDR: 0x00000000 LBEG : 0x4000c2e0 LEND : 0x4000c2f6 LCOUNT : 0x00000000
ELF file SHA256: 0000000000000000
Backtrace: 0x400d2ee6:0x3ffb1ee0 0x400d332c:0x3ffb1f60 0x400d1096:0x3ffb1f80 0x400d62c2:0x3ffb1fb0 0x400869bd:0x3ffb1fd0
Rebooting...
Solution:
To solve this issue, try incrementally increasing the size of the allocated memory by changing the value of the variable below.
constexpr int tensorArenaSize = 102 * 1024;
3. Model-specific setup function-related errors
3.a Not setting up Error_reporter
Some notes:
An Error Reporter is defined to provide a mechanism for logging debug information during inference. The code will compile and run just fine if there are no errors. But if there are errors, they might crash the program.
Error message:
ets Jun 8 2016 00:22:57
rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0018,len:4
load:0x3fff001c,len:1216
ho 0 tail 12 room 4
load:0x40078000,len:10944
load:0x40080400,len:6388
entry 0x400806b4
Guru Meditation Error: Core 1 panic'ed (LoadProhibited). Exception was unhandled.
Core 1 register dump:
PC : 0x400ed421 PS : 0x00060530 A0 : 0x800d0ef6 A1 : 0x3ffb1f30
A2 : 0x00000000 A3 : 0x3f400127 A4 : 0x00000001 A5 : 0x00000003
A6 : 0x00000003 A7 : 0x00000000 A8 : 0x800d3ac5 A9 : 0x3ffb1f30
A10 : 0x00000000 A11 : 0x0000006a A12 : 0x00000007 A13 : 0x00000001
A14 : 0x00000000 A15 : 0x3ffb84b0 SAR : 0x00000019 EXCCAUSE: 0x0000001c
EXCVADDR: 0x00000000 LBEG : 0x4000c2e0 LEND : 0x4000c2f6 LCOUNT : 0x00000000
ELF file SHA256: 0000000000000000
Backtrace: 0x400ed421:0x3ffb1f30 0x400d0ef3:0x3ffb1f80 0x400d570e:0x3ffb1fb0 0x400869bd:0x3ffb1fd0
Solution:
Setting up an error reporter using the below lines of code should solve the issue.
static tflite::MicroErrorReporter micro_error_reporter;
tflErrorReporter = &micro_error_reporter;
3.b Not mapping the ML model using the GetModel() method
Some notes:
The model data array (defined in the header file) is passed into a method named GetModel(). This method returns a Model pointer, which we assign to the variable tflModel. This variable represents our model. The type Model is a struct, which in C++ is very similar to a class. It’s defined in schema_generated.h, which holds our model’s data and allows us to query information about it.
const tflite::Model* tflModel; /* This line declares a pointer to the model data array */
tflModel = tflite::GetModel(model); /* this line of code assigns the pointer with the location to the model, this is the line you need to focus on for now */
Error message:
ets Jun 8 2016 00:22:57
rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0018,len:4
load:0x3fff001c,len:1216
ho 0 tail 12 room 4
load:0x40078000,len:10944
load:0x40080400,len:6388
entry 0x400806b4
Guru Meditation Error: Core 1 panic'ed (LoadProhibited). Exception was unhandled.
Core 1 register dump:
PC : 0x400f02bf PS : 0x00060330 A0 : 0x800d2c78 A1 : 0x3ffb1ee0
A2 : 0x00000000 A3 : 0x00000008 A4 : 0x00060320 A5 : 0x00000001
A6 : 0x00060323 A7 : 0x00000000 A8 : 0x80088a79 A9 : 0x3ffb1ec0
A10 : 0x00000001 A11 : 0x3ffb8058 A12 : 0x00000001 A13 : 0x00000001
A14 : 0x00060323 A15 : 0x00000000 SAR : 0x00000019 EXCCAUSE: 0x0000001c
EXCVADDR: 0x00000000 LBEG : 0x4000c46c LEND : 0x4000c477 LCOUNT : 0x00000000
ELF file SHA256: 0000000000000000
Backtrace: 0x400f02bf:0x3ffb1ee0 0x400d2c75:0x3ffb1f00 0x400d2cb3:0x3ffb1f20 0x400d36fe:0x3ffb1f50 0x400d1252:0x3ffb1f80 0x400d756a:0x3ffb1fb0 0x400869bd:0x3ffb1fd0
Solution:
Mapping the ML model using the GetModel() method using the below lines of code should solve the issue.
tflModel = tflite::GetModel(model);
3.c TF Lite versions don't match
Some notes:
if (model->version() != TFLITE_SCHEMA_VERSION) {
  error_reporter->Report(
      "Model provided is schema version %d not equal "
      "to supported version %d.\n",
      model->version(), TFLITE_SCHEMA_VERSION);
}
Using the above lines of code, we compare the model’s version number to TFLITE_SCHEMA_VERSION, which indicates the version of the TensorFlow Lite library we are currently using. If the numbers match, our model was converted with a compatible version of the TensorFlow Lite Converter. It’s good practice to check the model version because a mismatch might result in strange behavior that is tricky to debug.
Error message:
Model provided is schema version [model->version()] not equal to supported version [TFLITE_SCHEMA_VERSION]
Solution:
Changing the TensorFlow Lite version that is used to convert the model to a compatible version should solve the problem.
3.d Not creating an Op resolver
Some notes:
The Ops Resolver class contains the operations that are available to TensorFlow Lite for Microcontrollers and is able to provide them to the interpreter.
Error message:
error: 'micro_mutable_op_resolver' was not declared in this scope static tflite::MicroInterpreter static_interpreter(tflModel, micro_mutable_op_resolver, tensorArena, tensorArenaSize, tflErrorReporter);
Solution:
Creating an Op Resolver using either of the lines of code should solve the issue.
// Micro mutable Ops Resolver
static tflite::MicroMutableOpResolver micro_mutable_op_resolver;
// All Ops Resolver
static tflite::ops::micro::AllOpsResolver resolver;
3.e Not registering the necessary operations for OpResolver
Some notes:
There are two resolvers in TF Lite Micro: the AllOpsResolver and the MicroMutableOpResolver. The difference is that the AllOpsResolver contains, by default, every operation available to TFLite Micro, whereas with the MicroMutableOpResolver we must register only the operations our model needs. The benefit of the MicroMutableOpResolver is a smaller sketch size.
Error message:
Didn't find op for builtin opcode 'FULLY_CONNECTED' version '1'
Failed to get registration from op code d
Solution:
Registering the necessary operations with lines of code similar to the ones below should solve the error.
micro_mutable_op_resolver.AddBuiltin(
tflite::BuiltinOperator_FULLY_CONNECTED,
tflite::ops::micro::Register_FULLY_CONNECTED());
3.f Not declaring an interpreter
Some notes:
The Interpreter is the piece of code that will execute our model on the data we provide.
Error message:
Rebooting...
ets Jun 8 2016 00:22:57
rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0018,len:4
load:0x3fff001c,len:1216
ho 0 tail 12 room 4
load:0x40078000,len:10944
load:0x40080400,len:6388
entry 0x400806b4
Guru Meditation Error: Core 1 panic'ed (LoadProhibited). Exception was unhandled.
Core 1 register dump:
PC : 0x400d317a PS : 0x00060330 A0 : 0x800d0fd1 A1 : 0x3ffb1f60
A2 : 0x00000000 A3 : 0x00000048 A4 : 0x3ffbdbb8 A5 : 0x00000002
A6 : 0x00000001 A7 : 0x3ffbfed8 A8 : 0x00000009 A9 : 0x3ffb1f40
A10 : 0x3ffbfedc A11 : 0x3ffbdbd8 A12 : 0x00000020 A13 : 0x3ffbfefc
A14 : 0x00000000 A15 : 0x00000000 SAR : 0x00000019 EXCCAUSE: 0x0000001c
EXCVADDR: 0x00000008 LBEG : 0x4000c2e0 LEND : 0x4000c2f6 LCOUNT : 0x00000000
ELF file SHA256: 0000000000000000
Backtrace: 0x400d317a:0x3ffb1f60 0x400d0fce:0x3ffb1f80 0x400d60b6:0x3ffb1fb0 0x400869bd:0x3ffb1fd0
Solution:
Declaring an interpreter using the lines of code below should solve the issue.
static tflite::MicroInterpreter static_interpreter(tflModel, micro_mutable_op_resolver, tensorArena, tensorArenaSize, tflErrorReporter);
tflInterpreter = &static_interpreter;
3.g Not allocating tensors
Some notes:
AllocateTensors() allocates memory from the tensor arena for the model's tensors. If it is never called, the CPU crashes when inference is invoked.
Error message:
Guru Meditation Error: Core 1 panic'ed (StoreProhibited). Exception was unhandled.
Core 1 register dump:
PC : 0x400d113c PS : 0x00060930 A0 : 0x800d121a A1 : 0x3ffb1f30
A2 : 0x00000000 A3 : 0x00000000 A4 : 0x3ffd27a4 A5 : 0x3ffd27b4
A6 : 0x3ffd25a4 A7 : 0x00000000 A8 : 0x800d113a A9 : 0x3ffb1f20
A10 : 0xc2c74000 A11 : 0xc058e800 A12 : 0x00000000 A13 : 0x42c74000
A14 : 0x80000000 A15 : 0x80000000 SAR : 0x0000001d EXCCAUSE: 0x0000001d
EXCVADDR: 0x00000000 LBEG : 0x4000c349 LEND : 0x4000c36b LCOUNT : 0xffffffff
ELF file SHA256: 0000000000000000
Backtrace: 0x400d113c:0x3ffb1f30 0x400d1217:0x3ffb1f50 0x400d12de:0x3ffb1f70 0x400d1363:0x3ffb1f90 0x400d62e1:0x3ffb1fb0 0x400869bd:0x3ffb1fd0
Rebooting...
ets Jun 8 2016 00:22:57
rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0018,len:4
load:0x3fff001c,len:1216
ho 0 tail 12 room 4
load:0x40078000,len:10944
load:0x40080400,len:6388
entry 0x400806b4
Solution:
Allocating tensors using the line of code below should solve this issue.
tflInterpreter->AllocateTensors();
3.h Not assigning input and output tensors
Some notes:
The input and output tensors are used to load data into the model and read inference output, respectively. If they are not assigned, the CPU crashes when inference is invoked.
Error message:
Guru Meditation Error: Core 1 panic'ed (LoadProhibited). Exception was unhandled.
Core 1 register dump:
PC : 0x400d1126 PS : 0x00060730 A0 : 0x800d121a A1 : 0x3ffb1f30
A2 : 0x3ffc0f98 A3 : 0x00000000 A4 : 0x3ffd27a4 A5 : 0x3ffd27b4
A6 : 0x00000000 A7 : 0x00000002 A8 : 0x800d111e A9 : 0x3ffb1f20
A10 : 0x00000000 A11 : 0xc09d2000 A12 : 0x00000000 A13 : 0x3fb00000
A14 : 0x7ff00000 A15 : 0x00000409 SAR : 0x00000007 EXCCAUSE: 0x0000001c
EXCVADDR: 0x00000004 LBEG : 0x4000c349 LEND : 0x4000c36b LCOUNT : 0xffffffff
ELF file SHA256: 0000000000000000
Backtrace: 0x400d1126:0x3ffb1f30 0x400d1217:0x3ffb1f50 0x400d12de:0x3ffb1f70 0x400d1363:0x3ffb1f90 0x400d62c1:0x3ffb1fb0 0x400869bd:0x3ffb1fd0
Rebooting...
ets Jun 8 2016 00:22:57
rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0018,len:4
load:0x3fff001c,len:1216
ho 0 tail 12 room 4
load:0x40078000,len:10944
load:0x40080400,len:6388
entry 0x400806b4
Solution:
Assigning input and output tensors using the lines of code below should solve this issue.
tflInputTensor = tflInterpreter->input(0);
tflOutputTensor = tflInterpreter->output(0);
4. Miscellaneous Errors
4.a Empty Model.h file
Some notes:
One cause of this error is not installing the xxd package, which is used to convert the TensorFlow Lite model file into the C byte array in Model.h. xxd creates a hex dump of a given file or standard input, and can also convert a hex dump back into binary.
Error message:
ets Jun 8 2016 00:22:57
rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0018,len:4
load:0x3fff001c,len:1216
ho 0 tail 12 room 4
load:0x40078000,len:10944
load:0x40080400,len:6388
entry 0x400806b4
Guru Meditation Error: Core 1 panic'ed (LoadProhibited). Exception was unhandled.
Core 1 register dump:
PC : 0x400f02eb PS : 0x00060530 A0 : 0x800d2ca4 A1 : 0x3ffb1ee0
A2 : 0xadab6fb3 A3 : 0x00000008 A4 : 0x00060520 A5 : 0x00000001
A6 : 0x00060523 A7 : 0x00000000 A8 : 0x80088a79 A9 : 0x3ffb1ec0
A10 : 0x00000001 A11 : 0x3ffb8058 A12 : 0x00000001 A13 : 0x00000001
A14 : 0x00060523 A15 : 0x00000000 SAR : 0x00000019 EXCCAUSE: 0x0000001c
EXCVADDR: 0xadab6fb3 LBEG : 0x4000c46c LEND : 0x4000c477 LCOUNT : 0x00000000
ELF file SHA256: 0000000000000000
Backtrace: 0x400f02eb:0x3ffb1ee0 0x400d2ca1:0x3ffb1f00 0x400d2cdf:0x3ffb1f20 0x400d372a:0x3ffb1f50 0x400d127f:0x3ffb1f80 0x400d7596:0x3ffb1fb0 0x400869bd:0x3ffb1fd0
Rebooting...
Solution:
Installing the xxd package, using the command below, should solve this issue.
!apt-get -qq install xxd
5. Conclusion
I thank my GSoC mentor, Paul Ruiz, for guiding me throughout the project!